Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-18 Thread via GitHub


junrao commented on PR #16347:
URL: https://github.com/apache/kafka/pull/16347#issuecomment-2176493383

   > Maybe someone can explain more about what 
https://github.com/apache/kafka/pull/15673 does exactly.
   
   @cmccabe : The main issue that https://github.com/apache/kafka/pull/15673 
fixes 
is described in https://issues.apache.org/jira/browse/KAFKA-16480. 
   
   ```
   https://issues.apache.org/jira/browse/KAFKA-16154 introduced the changes to 
the ListOffsets API to accept latest-tiered-timestamp and return the 
corresponding offset.
   
   Those changes should have a) increased the version of the ListOffsets API b) 
increased the inter-broker protocol version c) hidden the latest version of the 
ListOffsets behind the latestVersionUnstable flag
   ```
   
   So, let's hold off on merging this PR until we understand how to consolidate 
it with https://github.com/apache/kafka/pull/15673.
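
   All three of the missing steps quoted above surface in the RPC's JSON spec 
(the inter-broker protocol gate itself lives in the MetadataVersion code, not 
the JSON). A hypothetical fragment in the style of Kafka's message definitions 
— the version numbers here are illustrative, not the real ListOffsets values:

   ```json
   {
     "apiKey": 2,
     "type": "request",
     "name": "ListOffsetsRequest",
     // (a) the latest version is bumped to cover the new timestamp type
     "validVersions": "0-9",
     // (c) the new latest version stays unusable until it is stabilized
     "latestVersionUnstable": true,
     "flexibleVersions": "6+",
     "fields": []
   }
   ```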


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16753: Implement share acknowledge API in partition (KIP-932) [kafka]

2024-06-18 Thread via GitHub


apoorvmittal10 commented on PR #16339:
URL: https://github.com/apache/kafka/pull/16339#issuecomment-2176483247

   @omkreddy Build completed with unrelated test failures.





Re: [PR] KAFKA-16936 : Upgrade slf4j, sys property for provider [kafka]

2024-06-18 Thread via GitHub


chia7712 commented on code in PR #16324:
URL: https://github.com/apache/kafka/pull/16324#discussion_r1644732202


##
build.gradle:
##
@@ -928,14 +928,15 @@ project(':server') {
 implementation libs.jacksonDatabind
 
 implementation libs.slf4jApi
+implementation libs.slf4jReload4j

Review Comment:
   > I think where the dependency becomes part of our API surface (i.e. we 
expect a normal user to bring in additional jars affected by it) we should (and 
have) taken more care with compatibility.
   
   That makes sense to me. @muralibasani could you please file a KIP for it? If 
you have no bandwidth, I'm happy to take it over 😄 






[jira] [Assigned] (KAFKA-16961) TestKRaftUpgrade system tests fail in v3.7.1 RC1

2024-06-18 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez reassigned KAFKA-16961:
---

Assignee: Igor Soarez

> TestKRaftUpgrade system tests fail in v3.7.1 RC1
> 
>
> Key: KAFKA-16961
> URL: https://issues.apache.org/jira/browse/KAFKA-16961
> Project: Kafka
>  Issue Type: Test
>Reporter: Luke Chen
>Assignee: Igor Soarez
>Priority: Blocker
> Fix For: 3.8.0, 3.7.1
>
>
>  
>  
> {code:java}
> 
> SESSION REPORT (ALL TESTS)
> ducktape version: 0.11.4
> session_id:       2024-06-14--003
> run time:         86 minutes 13.705 seconds
> tests run:        24
> passed:           18
> flaky:            0
> failed:           6
> ignored:          0
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   3 minutes 44.680 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   3 minutes 42.627 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.2.3.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   3 minutes 28.205 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.2.3.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   3 minutes 42.388 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.3.2.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 57.679 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.3.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 57.238 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.4.1.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 52.545 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.4.1.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 56.289 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.5.2.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 54.953 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.5.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 59.579 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=dev.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   3 minutes 21.016 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=dev.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 56.175 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_u

Re: [PR] KAFKA-16527; Implement request handling for updated KRaft RPCs [kafka]

2024-06-18 Thread via GitHub


jsancio commented on code in PR #16235:
URL: https://github.com/apache/kafka/pull/16235#discussion_r1644728890


##
clients/src/main/resources/common/message/BeginQuorumEpochResponse.json:
##
@@ -17,25 +17,35 @@
   "apiKey": 53,
   "type": "response",
   "name": "BeginQuorumEpochResponse",
-  "validVersions": "0",
-  "flexibleVersions": "none",
+  "validVersions": "0-1",

Review Comment:
   Fixed.






Re: [PR] KAFKA-16527; Implement request handling for updated KRaft RPCs [kafka]

2024-06-18 Thread via GitHub


jsancio commented on code in PR #16235:
URL: https://github.com/apache/kafka/pull/16235#discussion_r1644723447


##
clients/src/main/resources/common/message/BeginQuorumEpochRequest.json:
##
@@ -18,24 +18,37 @@
   "type": "request",
   "listeners": ["controller"],
   "name": "BeginQuorumEpochRequest",
-  "validVersions": "0",
-  "flexibleVersions": "none",
+  "validVersions": "0-1",

Review Comment:
   Done. I just realized that I missed one part of the request handling of this 
RPC.






Re: [PR] KAFKA-16527; Implement request handling for updated KRaft RPCs [kafka]

2024-06-18 Thread via GitHub


jsancio commented on code in PR #16235:
URL: https://github.com/apache/kafka/pull/16235#discussion_r1644714171


##
clients/src/main/resources/common/message/FetchResponse.json:
##
@@ -47,7 +47,7 @@
   // Version 15 is the same as version 14 (KIP-903).
   //
   // Version 16 adds the 'NodeEndpoints' field (KIP-951).
-  "validVersions": "0-16",
+  "validVersions": "0-17",

Review Comment:
   Fixed.






[PR] KAFKA-16972: Move BrokerTopicMetrics and AggregatedMetric to org.apache.kafka.storage.log.metrics [kafka]

2024-06-18 Thread via GitHub


FrankYang0529 opened a new pull request, #16387:
URL: https://github.com/apache/kafka/pull/16387

   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-18 Thread via GitHub


OmniaGM commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1644705014


##
clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java:
##
@@ -891,11 +900,20 @@ private void sendProduceRequest(long now, int 
destination, short acks, int timeo
 // which is supporting the new magic version to one which doesn't, 
then we will need to convert.
 if (!records.hasMatchingMagic(minUsedMagic))
 records = batch.records().downConvert(minUsedMagic, 0, 
time).records();
-ProduceRequestData.TopicProduceData tpData = tpd.find(tp.topic());
+ProduceRequestData.TopicProduceData tpData = canUseTopicId ?
+tpd.find(tp.topic(), topicIds.get(tp.topic())) :
+tpd.find(new 
ProduceRequestData.TopicProduceData().setName(tp.topic()));
+
 if (tpData == null) {
-tpData = new 
ProduceRequestData.TopicProduceData().setName(tp.topic());
+tpData = new ProduceRequestData.TopicProduceData();
 tpd.add(tpData);
 }
+if (canUseTopicId) {

Review Comment:
   good point just updated this






[PR] MINOR: Add docs for replica.alter.log.dirs.io.max.bytes.per.second co… [kafka]

2024-06-18 Thread via GitHub


mimaison opened a new pull request, #16386:
URL: https://github.com/apache/kafka/pull/16386

   …nfig
   
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





Re: [PR] KAFKA-16936 : Upgrade slf4j, sys property for provider [kafka]

2024-06-18 Thread via GitHub


gharris1727 commented on code in PR #16324:
URL: https://github.com/apache/kafka/pull/16324#discussion_r1644671854


##
build.gradle:
##
@@ -928,14 +928,15 @@ project(':server') {
 implementation libs.jacksonDatabind
 
 implementation libs.slf4jApi
+implementation libs.slf4jReload4j

Review Comment:
   > "should we guarantee the compatibility of dependency chain in updating 
kafka dependencies?"
   
   I think where the dependency becomes part of our API surface (i.e. we expect 
a normal user to bring in additional jars affected by it) we should (and have) 
taken more care with compatibility.
   
   For another example, this is a KIP dependency upgrade, because that 
dependency forms part of our API: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1032%3A+Upgrade+to+Jakarta+and+JavaEE+9+in+Kafka+4.0






Re: [PR] KAFKA-16772: Introduce kraft.version to support KIP-853 [kafka]

2024-06-18 Thread via GitHub


jsancio commented on PR #16230:
URL: https://github.com/apache/kafka/pull/16230#issuecomment-2176402435

   > We also currently don't propagate control records to the metadata layer, 
so that is something we'd have to consider changing.
   
   We actually do; I changed this in this PR: 
https://github.com/apache/kafka/commit/bfe81d622979809325c31d549943c40f6f0f7337#diff-5d07bcc6e077fc85a7f245be27d8ee2d3b7c4656f1e8a8dc3909eb70d3410e4cR150
   
   The controller just skips those control records.





[jira] [Commented] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-18 Thread Justine Olshan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855973#comment-17855973
 ] 

Justine Olshan commented on KAFKA-16986:


To be clear, this is something that pollutes the log from time to time on 
versions older than 3.5. 

If it is happening on version 3.5 or higher, and not just during an upgrade, 
please confirm and I will look further.

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.0.1, 3.6.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.
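
If the extra INFO lines are the only concern, the reporter's suggested 
workaround of demoting the client metadata logger can be expressed as a 
one-line logger override. A minimal log4j 1.x/reload4j sketch — adapt the 
syntax to whatever logging backend the application actually uses:

```properties
# Demote the client metadata cache logger so the per-partition
# "Resetting the last seen epoch" INFO lines are suppressed.
log4j.logger.org.apache.kafka.clients.Metadata=WARN
```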



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-18 Thread via GitHub


cmccabe commented on PR #16347:
URL: https://github.com/apache/kafka/pull/16347#issuecomment-2176365238

   @dajac Just pushed





Re: [PR] MINOR: Refresh of the docs [kafka]

2024-06-18 Thread via GitHub


mimaison merged PR #16375:
URL: https://github.com/apache/kafka/pull/16375





Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-18 Thread via GitHub


dajac commented on PR #16347:
URL: https://github.com/apache/kafka/pull/16347#issuecomment-2176354721

   @cmccabe I don't see the last changes. Did you push?





Re: [PR] KAFKA-16989: Use StringBuilder instead of String concatenation [kafka]

2024-06-18 Thread via GitHub


frankvicky commented on PR #16385:
URL: https://github.com/apache/kafka/pull/16385#issuecomment-2176349274

   Hi @chia7712, I have made a small change based on the feedback, PTAL 😺 





Re: [PR] MINOR: Refresh of the docs [kafka]

2024-06-18 Thread via GitHub


mimaison commented on PR #16375:
URL: https://github.com/apache/kafka/pull/16375#issuecomment-2176341868

   Since all changes are to `.html` files, I'll not wait for tests to complete. 
Merging to trunk.





Re: [PR] MINOR: consumer log fixes [kafka]

2024-06-18 Thread via GitHub


chia7712 merged PR #16345:
URL: https://github.com/apache/kafka/pull/16345





Re: [PR] KAFKA-16989: Use StringBuilder instead of String concatenation [kafka]

2024-06-18 Thread via GitHub


chia7712 commented on code in PR #16385:
URL: https://github.com/apache/kafka/pull/16385#discussion_r1644605641


##
metadata/src/main/java/org/apache/kafka/image/node/ClientQuotasImageNode.java:
##
@@ -127,20 +127,20 @@ static ClientQuotaEntity decodeEntity(String input) {
 } else {
 char c = input.charAt(i++);
 if (escaping) {
-value += c;
+value.append(c);
 escaping = false;
 } else {
 switch (c) {
 case ')':
-entries.put(type, value);
+entries.put(type, value.toString());
 type = null;
-value = "";
+value.delete(0, value.length());

Review Comment:
   maybe we can renew `StringBuilder`?
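
   The two ways of resetting the accumulator discussed here — clearing the 
builder in place versus "renewing" it — can be sketched side by side. This is 
an illustrative standalone example, not the actual `ClientQuotasImageNode` 
code:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class BuilderReset {
       // Variant 1: clear the builder in place, reusing its internal buffer.
       static List<String> splitClearing(String input, char sep) {
           List<String> out = new ArrayList<>();
           StringBuilder value = new StringBuilder();
           for (char c : input.toCharArray()) {
               if (c == sep) {
                   out.add(value.toString());
                   value.setLength(0); // same effect as value.delete(0, value.length())
               } else {
                   value.append(c);
               }
           }
           out.add(value.toString());
           return out;
       }

       // Variant 2: renew the builder, as the review suggests; the old buffer
       // becomes garbage, but the reset itself is a simple reassignment.
       static List<String> splitRenewing(String input, char sep) {
           List<String> out = new ArrayList<>();
           StringBuilder value = new StringBuilder();
           for (char c : input.toCharArray()) {
               if (c == sep) {
                   out.add(value.toString());
                   value = new StringBuilder();
               } else {
                   value.append(c);
               }
           }
           out.add(value.toString());
           return out;
       }
   }
   ```

   Both variants produce identical results; the choice is between buffer reuse 
and readability.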






[PR] KAFKA-16989: Use StringBuilder instead of String concatenation [kafka]

2024-06-18 Thread via GitHub


frankvicky opened a new pull request, #16385:
URL: https://github.com/apache/kafka/pull/16385

   Currently, the String concatenation in `ClientQuotasImageNode` might incur a 
heavy cost, so it should be replaced with `StringBuilder`.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





Re: [PR] MINOR: Refresh of the docs [kafka]

2024-06-18 Thread via GitHub


mimaison commented on code in PR #16375:
URL: https://github.com/apache/kafka/pull/16375#discussion_r1644600311


##
docs/streams/developer-guide/app-reset-tool.html:
##
@@ -78,9 +78,9 @@
 Step 1: Run the application reset tool
 Invoke the application reset tool from the command line
 Warning! This tool makes irreversible changes to your 
application. It is strongly recommended that you run this once with --dry-run to preview 
your changes before making them.
-/bin/kafka-streams-application-reset
+$ 
bin/kafka-streams-application-reset

Review Comment:
   Fixed



##
docs/implementation.html:
##
@@ -75,13 +75,13 @@ 5.3.1.1 Control Batches
 A control batch contains a single record called the control record. 
Control records should not be passed on to applications. Instead, they are used 
by consumers to filter out aborted transactional messages.
  The key of a control record conforms to the following schema: 
-version: int16 
(current version is 0)
+version: int16 (current version is 0)
 type: int16 (0 indicates an abort marker, 1 indicates a commit)
 The schema for the value of a control record is dependent on the type. 
The value is opaque to clients.
 
 5.3.2 Record
Record level headers were introduced in Kafka 0.11.0. The on-disk 
format of a record with Headers is delineated below. 
-   length: varint
+   length: varint

Review Comment:
   Fixed






[jira] [Resolved] (KAFKA-16941) Flaky test - testDynamicBrokerConfigUpdateUsingKraft [1] Type=Raft-Combined, MetadataVersion=4.0-IV0,Security=PLAINTEXT – kafka.admin.ConfigCommandIntegrationTest

2024-06-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16941.

Resolution: Duplicate

the flaky test will get fixed by KAFKA-16939

> Flaky test - testDynamicBrokerConfigUpdateUsingKraft [1] Type=Raft-Combined, 
> MetadataVersion=4.0-IV0,Security=PLAINTEXT – 
> kafka.admin.ConfigCommandIntegrationTest
> --
>
> Key: KAFKA-16941
> URL: https://issues.apache.org/jira/browse/KAFKA-16941
> Project: Kafka
>  Issue Type: Test
>Reporter: Igor Soarez
>Priority: Minor
>
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-16077/4/tests/
> {code:java}
> org.opentest4j.AssertionFailedError: Condition not met within timeout 5000. 
> [listener.name.internal.ssl.keystore.location] are not updated ==> expected: 
>  but was: 
>     at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>     at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>     at org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
>     at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
>     at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
>     at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:396)
>     at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444)
>     at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:393)
>     at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:377)
>     at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:367)
>     at 
> kafka.admin.ConfigCommandIntegrationTest.verifyConfigDefaultValue(ConfigCommandIntegrationTest.java:519)
>     at 
> kafka.admin.ConfigCommandIntegrationTest.deleteAndVerifyConfig(ConfigCommandIntegrationTest.java:514)
>     at 
> kafka.admin.ConfigCommandIntegrationTest.testDynamicBrokerConfigUpdateUsingKraft(ConfigCommandIntegrationTest.java:237)
>  {code}





Re: [PR] KAFKA-16921: Remove junit 4 dependency from connect:runtime module [kafka]

2024-06-18 Thread via GitHub


chia7712 commented on PR #16383:
URL: https://github.com/apache/kafka/pull/16383#issuecomment-2176280860

   @m1a2st could you please remove 'org.apache.kafka.test.IntegrationTest' from 
`build.gradle`?
   
   https://github.com/apache/kafka/blob/trunk/build.gradle#L527
   https://github.com/apache/kafka/blob/trunk/build.gradle#L565
   





Re: [PR] MINOR: Refresh of the docs [kafka]

2024-06-18 Thread via GitHub


chia7712 commented on code in PR #16375:
URL: https://github.com/apache/kafka/pull/16375#discussion_r1644574034


##
docs/streams/developer-guide/app-reset-tool.html:
##
@@ -78,9 +78,9 @@
 Step 1: Run the application reset tool
 Invoke the application reset tool from the command line
 Warning! This tool makes irreversible changes to your 
application. It is strongly recommended that you run this once with --dry-run to preview 
your changes before making them.
-/bin/kafka-streams-application-reset
+$ 
bin/kafka-streams-application-reset

Review Comment:
   ` `

##
docs/implementation.html:
##
@@ -75,13 +75,13 @@ 5.3.1.1 Control Batches
 A control batch contains a single record called the control record. 
Control records should not be passed on to applications. Instead, they are used 
by consumers to filter out aborted transactional messages.
  The key of a control record conforms to the following schema: 
-version: int16 
(current version is 0)
+version: int16 (current version is 0)
 type: int16 (0 indicates an abort marker, 1 indicates a commit)
 The schema for the value of a control record is dependent on the type. 
The value is opaque to clients.
 
 5.3.2 Record
Record level headers were introduced in Kafka 0.11.0. The on-disk 
format of a record with Headers is delineated below. 
-   length: varint
+   length: varint

Review Comment:
   `` -> ``






[PR] KAFKA-16967: NioEchoServer fails to register connection and causes flaky failure. [kafka]

2024-06-18 Thread via GitHub


frankvicky opened a new pull request, #16384:
URL: https://github.com/apache/kafka/pull/16384

   The issue involves the `NioEchoServer` crashing when attempting to register 
new connections due to a connection already being registered. This problem 
occurs specifically in the test methods 
(`testUngracefulRemoteCloseDuringHandshakeRead`, 
`testUngracefulRemoteCloseDuringHandshakeWrite`) handling ungraceful remote 
closes during the handshake. The test fails because the connection becomes 
unexpectedly idle and expires, leading to a timeout.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Assigned] (KAFKA-16989) Use StringBuilder instead of string concatenation

2024-06-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-16989:
--

Assignee: TengYao Chi  (was: Chia-Ping Tsai)

> Use StringBuilder instead of string concatenation
> -
>
> Key: KAFKA-16989
> URL: https://issues.apache.org/jira/browse/KAFKA-16989
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: TengYao Chi
>Priority: Minor
>
> https://github.com/apache/kafka/blob/2fd00ce53678509c9f2cfedb428e37a871e3d530/metadata/src/main/java/org/apache/kafka/image/node/ClientQuotasImageNode.java#L130
> The string concatenation will create many new strings and we can reduce the 
> cost by using StringBuilder





[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-18 Thread Vinicius Vieira dos Santos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Vieira dos Santos updated KAFKA-16986:
---
Affects Version/s: 3.6.1

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.0.1, 3.6.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" very frequently. From what I understand, this 
> behavior is not expected because the topic was not deleted and recreated, so 
> it should simply use the cached data and not emit this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.
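If the goal is only to quiet these lines, one option (a sketch, assuming a log4j 1.x properties setup; adjust to your logging configuration) is a per-logger override for the client `Metadata` class:

```properties
# Raise the client Metadata logger above INFO so the
# "Resetting the last seen epoch" lines are suppressed.
log4j.logger.org.apache.kafka.clients.Metadata=WARN
```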





[jira] [Commented] (KAFKA-16989) Use StringBuilder instead of string concatenation

2024-06-18 Thread TengYao Chi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855954#comment-17855954
 ] 

TengYao Chi commented on KAFKA-16989:
-

Gentle ping [~chia7712] 
If you have not started working on this one, I would like to handle it. :)

> Use StringBuilder instead of string concatenation
> -
>
> Key: KAFKA-16989
> URL: https://issues.apache.org/jira/browse/KAFKA-16989
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> https://github.com/apache/kafka/blob/2fd00ce53678509c9f2cfedb428e37a871e3d530/metadata/src/main/java/org/apache/kafka/image/node/ClientQuotasImageNode.java#L130
> The string concatenation will create many new strings and we can reduce the 
> cost by using StringBuilder





[jira] [Created] (KAFKA-16989) Use StringBuilder instead of string concatenation

2024-06-18 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16989:
--

 Summary: Use StringBuilder instead of string concatenation
 Key: KAFKA-16989
 URL: https://issues.apache.org/jira/browse/KAFKA-16989
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


https://github.com/apache/kafka/blob/2fd00ce53678509c9f2cfedb428e37a871e3d530/metadata/src/main/java/org/apache/kafka/image/node/ClientQuotasImageNode.java#L130

The string concatenation will create many new strings and we can reduce the 
cost by using StringBuilder





Re: [PR] KAFKA-16952: Do not bump broker epoch when re-registering the same incarnation [kafka]

2024-06-18 Thread via GitHub


cmccabe merged PR #16333:
URL: https://github.com/apache/kafka/pull/16333





Re: [PR] KAFKA-16936 : Upgrade slf4j, sys property for provider [kafka]

2024-06-18 Thread via GitHub


chia7712 commented on code in PR #16324:
URL: https://github.com/apache/kafka/pull/16324#discussion_r1644499138


##
build.gradle:
##
@@ -928,14 +928,15 @@ project(':server') {
 implementation libs.jacksonDatabind
 
 implementation libs.slf4jApi
+implementation libs.slf4jReload4j

Review Comment:
   > it also seems this could break users that provide their own slf4j backend, 
for example someone using slf4j-simple-1.5.5.jar. After this change they need to 
switch to slf4j-simple-2.0.13.jar.
   
   That is a good point, and it seems to me the true issue is "should we 
guarantee the compatibility of the dependency chain when updating kafka 
dependencies?"
   
   If the answer is yes, it implies that we need a KIP for each dependency 
update, because it is always possible to break user-provided dependencies.
   
   Or is slf4j a special case that we do care about? If so, a workaround is to 
add dependencies on all slf4j backends. If users consume kafka as a dependency, 
their dependency management tool will then pick up the "newer" backend 
implicitly. If users want to run kafka with a different provider, they can 
define a system property instead of offering the provider jar.
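For illustration, slf4j 2.0.9 and later support selecting a provider explicitly via a system property instead of relying on classpath ordering. A sketch (the provider class shown is slf4j-simple's; substitute the one for your backend):

```shell
# Force a specific SLF4J provider at startup (slf4j >= 2.0.9).
KAFKA_OPTS="-Dslf4j.provider=org.slf4j.simple.SimpleServiceProvider" \
  bin/kafka-server-start.sh config/server.properties
```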






Re: [PR] KAFKA-16725: Adjust share group configs to match KIP [kafka]

2024-06-18 Thread via GitHub


omkreddy commented on code in PR #16368:
URL: https://github.com/apache/kafka/pull/16368#discussion_r1644474270


##
server/src/main/java/org/apache/kafka/server/config/ShareGroupConfigs.java:
##
@@ -88,9 +88,9 @@ public class ShareGroupConfigs {
 public static final ConfigDef CONFIG_DEF =  new ConfigDef()
 .defineInternal(SHARE_GROUP_ENABLE_CONFIG, BOOLEAN, 
SHARE_GROUP_ENABLE_DEFAULT, null, MEDIUM, SHARE_GROUP_ENABLE_DOC)
 .define(SHARE_GROUP_DELIVERY_COUNT_LIMIT_CONFIG, INT, 
SHARE_GROUP_DELIVERY_COUNT_LIMIT_DEFAULT, between(2, 10), MEDIUM, 
SHARE_GROUP_DELIVERY_COUNT_LIMIT_DOC)
-.define(SHARE_GROUP_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DEFAULT, atLeast(1), MEDIUM, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DOC)
-.define(SHARE_GROUP_MIN_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_MIN_RECORD_LOCK_DURATION_MS_DEFAULT, atLeast(1), MEDIUM, 
SHARE_GROUP_MIN_RECORD_LOCK_DURATION_MS_DOC)
-.define(SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS_DEFAULT, atLeast(1), MEDIUM, 
SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS_DOC)
+.define(SHARE_GROUP_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DEFAULT, between(1000, 6), MEDIUM, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DOC)

Review Comment:
   Ok, in that case we need to remove the validations added here 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L1147
 and update tests in KafkaConfigTest?






Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-18 Thread via GitHub


cmccabe commented on code in PR #16347:
URL: https://github.com/apache/kafka/pull/16347#discussion_r1644427304


##
server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java:
##
@@ -202,11 +202,20 @@ public enum MetadataVersion {
 // Add new fetch request version for KIP-951
 IBP_3_7_IV4(19, "3.7", "IV4", false),
 
+// New version for the Kafka 3.8.0 release.
+IBP_3_8_IV0(20, "3.8", "IV0", false),
+
+//
+// NOTE: MetadataVersions after this point are unstable and may be changed.
+// If users attempt to use an unstable MetadataVersion, they will get an 
error.
+// Please move this comment when updating the LATEST_PRODUCTION constant.
+//
+
 // Add ELR related supports (KIP-966).
-IBP_3_8_IV0(20, "3.8", "IV0", true),
+IBP_3_9_IV0(21, "3.9", "IV0", true),
 
 // Introduce version 1 of the GroupVersion feature (KIP-848).
-IBP_4_0_IV0(21, "4.0", "IV0", false);
+IBP_4_0_IV0(22, "4.0", "IV0", true);

Review Comment:
   I don't think this needs to be `true` (at the moment, at least: someone may 
add metadata changes in `IBP_4_0_IV0` later). I will change it to `false`.






Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-18 Thread via GitHub


cmccabe commented on code in PR #16347:
URL: https://github.com/apache/kafka/pull/16347#discussion_r1644425919


##
server-common/src/main/java/org/apache/kafka/server/common/GroupVersion.java:
##
@@ -22,7 +22,7 @@
 public enum GroupVersion implements FeatureVersion {
 
 // Version 1 enables the consumer rebalance protocol (KIP-848).
-GV_1(1, MetadataVersion.IBP_4_0_IV0, Collections.emptyMap());
+GV_1(1, MetadataVersion.IBP_3_9_IV0, Collections.emptyMap());

Review Comment:
   @dajac : Done






Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-18 Thread via GitHub


cmccabe commented on PR #16347:
URL: https://github.com/apache/kafka/pull/16347#issuecomment-2176042112

   > @cmccabe : https://github.com/apache/kafka/pull/15673 is fixing a mistake 
that shouldn't be in 3.8.0. We should have bumped up the API version for 
ListOffset, but we didn't. To me, that seems a blocker for 3.8.0, right?
   
   The discussion thread 
[here](https://github.com/apache/kafka/pull/15673/files#r1624734035) seems to 
suggest that #15673 is adding a new feature: 
   
   > @junrao : To let the AdminClient use this, we need to add a new type of 
OffsetSpec and a way to set oldestAllowedVersion in ListOffsetsRequest.Build to 
version 9, right?
   >@clolov: Yes, correct, but I want to get this PR in before I move to that. 
I do not want to bunch all of these changes in the same PR
   
   Additionally, the PR title is "KAFKA-16480: Bump ListOffsets version, IBP 
version and mark last version of ListOffsets as unstable #15673". But when I 
look at  
[clients/src/main/resources/common/message/ListOffsetsRequest.json](https://github.com/apache/kafka/pull/15673/files#diff-fac7080d67da905a80126d58fc1745c9a1409de7ef7d093c2ac66a888b134633)
 in the PR, I do not see `"latestVersionUnstable": true`.
   
   Maybe someone can explain more about what #15673 does exactly.





Re: [PR] KAFKA-16725: Adjust share group configs to match KIP [kafka]

2024-06-18 Thread via GitHub


AndrewJSchofield commented on code in PR #16368:
URL: https://github.com/apache/kafka/pull/16368#discussion_r1644402003


##
server/src/main/java/org/apache/kafka/server/config/ShareGroupConfigs.java:
##
@@ -88,9 +88,9 @@ public class ShareGroupConfigs {
 public static final ConfigDef CONFIG_DEF =  new ConfigDef()
 .defineInternal(SHARE_GROUP_ENABLE_CONFIG, BOOLEAN, 
SHARE_GROUP_ENABLE_DEFAULT, null, MEDIUM, SHARE_GROUP_ENABLE_DOC)
 .define(SHARE_GROUP_DELIVERY_COUNT_LIMIT_CONFIG, INT, 
SHARE_GROUP_DELIVERY_COUNT_LIMIT_DEFAULT, between(2, 10), MEDIUM, 
SHARE_GROUP_DELIVERY_COUNT_LIMIT_DOC)
-.define(SHARE_GROUP_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DEFAULT, atLeast(1), MEDIUM, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DOC)
-.define(SHARE_GROUP_MIN_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_MIN_RECORD_LOCK_DURATION_MS_DEFAULT, atLeast(1), MEDIUM, 
SHARE_GROUP_MIN_RECORD_LOCK_DURATION_MS_DOC)
-.define(SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS_DEFAULT, atLeast(1), MEDIUM, 
SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS_DOC)
+.define(SHARE_GROUP_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DEFAULT, between(1000, 6), MEDIUM, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DOC)

Review Comment:
   I think they do different things.
   
   `SHARE_GROUP_RECORD_LOCK_DURATION_MS` configures the lock duration for share 
groups which do not have a specific configuration.
   
   `SHARE_GROUP_MIN_RECORD_LOCK_DURATION_MS` is the minimum value the broker 
will accept for a group-specific configuration for the lock duration. 
`SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS` is the maximum value the broker will 
accept for a group-specific configuration for the lock duration.
   
   Once we have KAFKA-14511, it will be possible to define the group-specific 
configurations too.
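The relationship between the three configs can be sketched as follows (names, defaults, and the exception type here are illustrative assumptions, not Kafka's actual implementation): the broker-wide default applies when a group has no override, and a group-specific override is only accepted inside the [min, max] window.

```java
public class LockDurationResolver {
    // Resolve the effective record lock duration for a share group.
    // groupOverrideMs is null when the group has no specific config.
    static int resolve(Integer groupOverrideMs, int defaultMs, int minMs, int maxMs) {
        if (groupOverrideMs == null) {
            return defaultMs;                       // no group-specific config
        }
        if (groupOverrideMs < minMs || groupOverrideMs > maxMs) {
            throw new IllegalArgumentException(
                "record lock duration " + groupOverrideMs
                + " outside [" + minMs + ", " + maxMs + "]");
        }
        return groupOverrideMs;
    }

    public static void main(String[] args) {
        System.out.println(resolve(null, 30000, 15000, 60000));   // falls back to default
        System.out.println(resolve(45000, 30000, 15000, 60000));  // accepted override
    }
}
```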






[jira] [Commented] (KAFKA-13679) Superfluous node disconnected log messages

2024-06-18 Thread Nicolas Guyomar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855926#comment-17855926
 ] 

Nicolas Guyomar commented on KAFKA-13679:
-

I'm only now noticing that some bugs were created for that log. I recently 
submitted a PR to lower those idle-connection disconnect logs to DEBUG with a 
more intuitive message: [https://github.com/apache/kafka/pull/16089] 

> Superfluous node disconnected log messages
> --
>
> Key: KAFKA-13679
> URL: https://issues.apache.org/jira/browse/KAFKA-13679
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.1.0
>Reporter: Philip Bourke
>Priority: Minor
>
> In Kafka 3.1 the "{_}Node x disconnected{_}" log message in the 
> {{NetworkClient.java}} class was changed from DEBUG to INFO - 
> [https://github.com/apache/kafka/commit/79d97bd29d059e8ba8ee7726b49d76e03e281059#diff-dcc1af531d191de8da1e23ad6d878a3efc463ba4670dbcf2896295a9dacd1c18R935]
> Now my application logs are full of node disconnected messages and it would 
> indicate that there may be a connectivity problem. However I can see that the 
> logs are getting written every 5 minutes exactly, and it's the AdminClient 
> that is writing the logs.
> {code:bash}
> 2022-02-16 14:45:39,277 [d-60105f051cdb-admin] INFO   
> o.apache.kafka.clients.NetworkClient - [AdminClient 
> clientId=desktop-session-internal-user-streamer-v1-9888ff1d-446e-40cd-88dd-60105f051cdb-admin]
>  Node 1 disconnected.
> {code}
> My guess is that it may be the 
> [connections.max.idle.ms|https://kafka.apache.org/documentation/#adminclientconfigs_connections.max.idle.ms]
>  config setting, and there is in fact no issue with connectivity to the 
> brokers?
> I'm raising this ticket here because the logs are full of these repetitive 
> messages indicating an issue and setting off alarm bells, and also because I 
> did not get a response on the confluent forum or in any slack channels.





[jira] [Commented] (KAFKA-14080) too many node disconnected message in kafka-clients

2024-06-18 Thread Nicolas Guyomar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855927#comment-17855927
 ] 

Nicolas Guyomar commented on KAFKA-14080:
-

I'm only now noticing that some bugs were created for that log. I recently 
submitted a PR to lower those idle-connection disconnect logs to DEBUG with a 
more intuitive message: [https://github.com/apache/kafka/pull/16089] 

> too many node disconnected message in kafka-clients
> ---
>
> Key: KAFKA-14080
> URL: https://issues.apache.org/jira/browse/KAFKA-14080
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.1.1
>Reporter: rui
>Priority: Major
>
> when upgrading kafka-clients from 3.0.1 to 3.1.1, there are a lot of "Node 0 
> disconnected" messages in NetworkClient per day per listener (20-30k)
> we think it is introduced by
> [https://github.com/a0x8o/kafka/commit/cf22405663ec7854bde7eaa3f22b9818c276563f]
> questions:
>  # is it normal to see so many "Node X disconnected" messages at INFO level? 
> does my kafka server have any issue?
>  # it mentioned a back-off configuration to reduce the messages, but does it 
> work? (there are still so many messages)





Re: [PR] KAFKA-16527; Implement request handling for updated KRaft RPCs [kafka]

2024-06-18 Thread via GitHub


showuon commented on code in PR #16235:
URL: https://github.com/apache/kafka/pull/16235#discussion_r1644362575


##
clients/src/main/resources/common/message/FetchResponse.json:
##
@@ -47,7 +47,7 @@
   // Version 15 is the same as version 14 (KIP-903).
   //
   // Version 16 adds the 'NodeEndpoints' field (KIP-951).
-  "validVersions": "0-16",
+  "validVersions": "0-17",

Review Comment:
   We need to add a comment to mention why we need to bump the version.



##
clients/src/main/resources/common/message/BeginQuorumEpochResponse.json:
##
@@ -17,25 +17,35 @@
   "apiKey": 53,
   "type": "response",
   "name": "BeginQuorumEpochResponse",
-  "validVersions": "0",
-  "flexibleVersions": "none",
+  "validVersions": "0-1",

Review Comment:
   Please add a comment to mention what changed in v.1.



##
clients/src/main/resources/common/message/EndQuorumEpochResponse.json:
##
@@ -17,25 +17,35 @@
   "apiKey": 54,
   "type": "response",
   "name": "EndQuorumEpochResponse",
-  "validVersions": "0",
-  "flexibleVersions": "none",
+  "validVersions": "0-1",

Review Comment:
   Please add a comment to mention what changed in v.1.



##
clients/src/main/resources/common/message/FetchSnapshotResponse.json:
##
@@ -17,7 +17,7 @@
   "apiKey": 59,
   "type": "response",
   "name": "FetchSnapshotResponse",
-  "validVersions": "0",
+  "validVersions": "0-1",

Review Comment:
   Please add a comment to mention what changed in v.1.



##
core/src/main/scala/kafka/server/KafkaServer.scala:
##
@@ -440,6 +441,8 @@ class KafkaServer(
 threadNamePrefix,
 CompletableFuture.completedFuture(quorumVoters),
 QuorumConfig.parseBootstrapServers(config.quorumBootstrapServers),
+// Endpoint information is only needed for controllers (voters). 
ZK brokers can never be controllers
+Endpoints.empty(),

Review Comment:
   Should we make it clear that `ZK brokers can never be [KRaft] controllers`?



##
clients/src/main/resources/common/message/BeginQuorumEpochRequest.json:
##
@@ -18,24 +18,37 @@
   "type": "request",
   "listeners": ["controller"],
   "name": "BeginQuorumEpochRequest",
-  "validVersions": "0",
-  "flexibleVersions": "none",
+  "validVersions": "0-1",

Review Comment:
   Please add a comment to mention what changed in v.1.



##
clients/src/main/resources/common/message/EndQuorumEpochRequest.json:
##
@@ -18,26 +18,41 @@
   "type": "request",
   "listeners": ["controller"],
   "name": "EndQuorumEpochRequest",
-  "validVersions": "0",
-  "flexibleVersions": "none",
+  "validVersions": "0-1",

Review Comment:
   Please add a comment to mention what changed in v.1.



##
clients/src/main/resources/common/message/FetchSnapshotRequest.json:
##
@@ -18,7 +18,7 @@
   "type": "request",
   "listeners": ["controller"],
   "name": "FetchSnapshotRequest",
-  "validVersions": "0",
+  "validVersions": "0-1",

Review Comment:
   Please add a comment to mention what changed in v.1.



##
clients/src/main/resources/common/message/VoteResponse.json:
##
@@ -17,29 +17,37 @@
   "apiKey": 52,
   "type": "response",
   "name": "VoteResponse",
-  "validVersions": "0",
+  "validVersions": "0-1",

Review Comment:
   ditto



##
clients/src/main/resources/common/message/VoteRequest.json:
##
@@ -18,30 +18,36 @@
   "type": "request",
   "listeners": ["controller"],
   "name": "VoteRequest",
-  "validVersions": "0",
+  "validVersions": "0-1",

Review Comment:
   Please add a comment to mention what changed in v.1.






[jira] [Updated] (KAFKA-16988) InsufficientResourcesError in ConnectDistributedTest system test

2024-06-18 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat updated KAFKA-16988:
---
Fix Version/s: 3.8.0
   3.7.1

> InsufficientResourcesError in ConnectDistributedTest system test
> 
>
> Key: KAFKA-16988
> URL: https://issues.apache.org/jira/browse/KAFKA-16988
> Project: Kafka
>  Issue Type: Bug
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> Saw InsufficientResourcesError when running 
> `ConnectDistributedTest#test_exactly_once_source` system test.
>  
> {code:java}
>  name="test_exactly_once_source_clean=False_connect_protocol=compatible_metadata_quorum=ZK_use_new_coordinator=False"
>  classname="kafkatest.tests.connect.connect_distributed_test" 
> time="403.812"> requested: 1. linux nodes available: 0')" 
> type="exception">InsufficientResourcesError('linux nodes requested: 1. linux 
> nodes available: 0') Traceback (most recent call last): File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run data = self.run_test() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File 
> "/opt/kafka-dev/tests/kafkatest/tests/connect/connect_distributed_test.py", 
> line 928, in test_exactly_once_source consumer_validator = 
> ConsoleConsumer(self.test_context, 1, self.kafka, self.source.topic, 
> consumer_timeout_ms=1000, print_key=True) File 
> "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 97, in 
> __init__ BackgroundThreadService.__init__(self, context, num_nodes) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/background_thread.py",
>  line 26, in __init__ super(BackgroundThreadService, self).__init__(context, 
> num_nodes, cluster_spec, *args, **kwargs) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 107, in __init__ self.allocate_nodes() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 217, in allocate_nodes self.nodes = self.cluster.alloc(self.cluster_spec) 
> File "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/cluster.py", 
> line 54, in alloc allocated = self.do_alloc(cluster_spec) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/finite_subcluster.py",
>  line 37, in do_alloc good_nodes, bad_nodes = 
> self._available_nodes.remove_spec(cluster_spec) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/node_container.py", 
> line 131, in remove_spec raise InsufficientResourcesError(err) 
> ducktape.cluster.node_container.InsufficientResourcesError: linux nodes 
> requested: 1. linux nodes available: 0 
>  name="test_exactly_once_source_clean=False_connect_protocol=sessioned_metadata_quorum=ZK_use_new_coordinator=False"
>  classname="kafkatest.tests.connect.connect_distributed_test" 
> time="376.160"> requested: 1. linux nodes available: 0')" 
> type="exception">InsufficientResourcesError('linux nodes requested: 1. linux 
> nodes available: 0') Traceback (most recent call last): File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run data = self.run_test() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File 
> "/opt/kafka-dev/tests/kafkatest/tests/connect/connect_distributed_test.py", 
> line 928, in test_exactly_once_source consumer_validator = 
> ConsoleConsumer(self.test_context, 1, self.kafka, self.source.topic, 
> consumer_timeout_ms=1000, print_key=True) File 
> "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 97, in 
> __init__ BackgroundThreadService.__init__(self, context, num_nodes) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/background_thread.py",
>  line 26, in __init__ super(BackgroundThreadService, self).__init__(context, 
> num_nodes, cluster_spec, *args, **kwargs) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 107, in __init__ self.allocate_nodes() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 217, in allocate_nodes self.nodes = self.cluster.alloc(self.cluster_spec) 
> File "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/cluster.py", 
> line 54, in alloc 

[jira] [Resolved] (KAFKA-16988) InsufficientResourcesError in ConnectDistributedTest system test

2024-06-18 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat resolved KAFKA-16988.

Resolution: Fixed

> InsufficientResourcesError in ConnectDistributedTest system test
> 
>
> Key: KAFKA-16988
> URL: https://issues.apache.org/jira/browse/KAFKA-16988
> Project: Kafka
>  Issue Type: Bug
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> Saw InsufficientResourcesError when running 
> `ConnectDistributedTest#test_exactly_once_source` system test.
>  
> {code:java}
>  name="test_exactly_once_source_clean=False_connect_protocol=compatible_metadata_quorum=ZK_use_new_coordinator=False"
>  classname="kafkatest.tests.connect.connect_distributed_test" 
> time="403.812"> requested: 1. linux nodes available: 0')" 
> type="exception">InsufficientResourcesError('linux nodes requested: 1. linux 
> nodes available: 0') Traceback (most recent call last): File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run data = self.run_test() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File 
> "/opt/kafka-dev/tests/kafkatest/tests/connect/connect_distributed_test.py", 
> line 928, in test_exactly_once_source consumer_validator = 
> ConsoleConsumer(self.test_context, 1, self.kafka, self.source.topic, 
> consumer_timeout_ms=1000, print_key=True) File 
> "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 97, in 
> __init__ BackgroundThreadService.__init__(self, context, num_nodes) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/background_thread.py",
>  line 26, in __init__ super(BackgroundThreadService, self).__init__(context, 
> num_nodes, cluster_spec, *args, **kwargs) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 107, in __init__ self.allocate_nodes() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 217, in allocate_nodes self.nodes = self.cluster.alloc(self.cluster_spec) 
> File "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/cluster.py", 
> line 54, in alloc allocated = self.do_alloc(cluster_spec) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/finite_subcluster.py",
>  line 37, in do_alloc good_nodes, bad_nodes = 
> self._available_nodes.remove_spec(cluster_spec) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/node_container.py", 
> line 131, in remove_spec raise InsufficientResourcesError(err) 
> ducktape.cluster.node_container.InsufficientResourcesError: linux nodes 
> requested: 1. linux nodes available: 0 
>  name="test_exactly_once_source_clean=False_connect_protocol=sessioned_metadata_quorum=ZK_use_new_coordinator=False"
>  classname="kafkatest.tests.connect.connect_distributed_test" 
> time="376.160"> requested: 1. linux nodes available: 0')" 
> type="exception">InsufficientResourcesError('linux nodes requested: 1. linux 
> nodes available: 0') Traceback (most recent call last): File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run data = self.run_test() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File 
> "/opt/kafka-dev/tests/kafkatest/tests/connect/connect_distributed_test.py", 
> line 928, in test_exactly_once_source consumer_validator = 
> ConsoleConsumer(self.test_context, 1, self.kafka, self.source.topic, 
> consumer_timeout_ms=1000, print_key=True) File 
> "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 97, in 
> __init__ BackgroundThreadService.__init__(self, context, num_nodes) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/background_thread.py",
>  line 26, in __init__ super(BackgroundThreadService, self).__init__(context, 
> num_nodes, cluster_spec, *args, **kwargs) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 107, in __init__ self.allocate_nodes() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 217, in allocate_nodes self.nodes = self.cluster.alloc(self.cluster_spec) 
> File "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/cluster.py", 
> line 54, in alloc allocated = self.do_alloc(

Re: [PR] KAFKA-16988: add 1 more node for test_exactly_once_source system test [kafka]

2024-06-18 Thread via GitHub


soarez commented on PR #16379:
URL: https://github.com/apache/kafka/pull/16379#issuecomment-2175953794

   Backported to branches 3.8 and 3.7.





[PR] KAFKA-16921: Migrate test of connect module to Junit5 (Remove junit 4 dependency) [kafka]

2024-06-18 Thread via GitHub


m1a2st opened a new pull request, #16383:
URL: https://github.com/apache/kafka/pull/16383

   remove the JUnit 4 dependency from the connect module, and fix some classes that were missed
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





Re: [PR] MINOR: Refresh of the docs [kafka]

2024-06-18 Thread via GitHub


showuon commented on PR #16375:
URL: https://github.com/apache/kafka/pull/16375#issuecomment-2175941712

   > If it's a pain to review, I can split the PR.
   
   It's fine. It looks good. Thanks.





Re: [PR] KAFKA-16988: add 1 more node for test_exactly_once_source system test [kafka]

2024-06-18 Thread via GitHub


soarez merged PR #16379:
URL: https://github.com/apache/kafka/pull/16379





[jira] [Comment Edited] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-18 Thread Vinicius Vieira dos Santos (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855773#comment-17855773
 ] 

Vinicius Vieira dos Santos edited comment on KAFKA-16986 at 6/18/24 11:54 AM:
--

[~jolshan] The only problem I currently see is the log. I took a look, and many 
of our applications log this message, which pollutes the logs from time to time. 
I don't know exactly which process triggers this log, but it is displayed several 
times during the pod's life cycle, not only at startup. The screenshot I added to 
the issue shows this: for the same topic and the same partition there are several 
log lines at different times in the same pod, without restarts or anything like 
that. It is also important to emphasize that throughout the life cycle of these 
applications we only have one producer instance, which remains the same for the 
life of the pod. I even validated the code of our applications to check that 
there wasn't a situation where the producer kept being destroyed and re-created.

 

I will leave below the occurrences in the log of the same pod that has been 
operating for 2 days: https://pastecode.io/s/zn1u118d

 



> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" very frequently. From what I understand, this 
> behavior is not expected because the topic was not deleted and recreated, so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.
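If the workaround of raising the log level is chosen, a minimal sketch would look like the fragment below. It assumes a log4j 1.x properties-based configuration (adjust the syntax for logback or log4j2); the logger name comes from the issue description, everything else is illustrative:

```properties
# Assumption: the application configures logging via log4j 1.x-style properties.
# Raises org.apache.kafka.clients.Metadata to WARN so the INFO-level
# "Resetting the last seen epoch" lines are no longer emitted.
log4j.logger.org.apache.kafka.clients.Metadata=WARN
```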



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-13898) metrics.recording.level is underdocumented

2024-06-18 Thread Ksolves (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855901#comment-17855901
 ] 

Ksolves commented on KAFKA-13898:
-

I can pick this up and update the document.

> metrics.recording.level is underdocumented
> --
>
> Key: KAFKA-13898
> URL: https://issues.apache.org/jira/browse/KAFKA-13898
> Project: Kafka
>  Issue Type: Improvement
>  Components: docs
>Reporter: Tom Bentley
>Assignee: Ksolves
>Priority: Minor
>  Labels: newbie
>
> metrics.recording.level is only briefly described in the documentation. In 
> particular the recording level associated with each metric is not documented, 
> which makes it difficult to know the effect of changing the level.
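For reference, the setting in question is a plain client configuration key; a hedged example follows (the key name is from the Kafka configuration docs, the surrounding file is illustrative):

```properties
# Client configuration fragment (consumer/producer/streams), illustrative only.
# Valid values in recent releases are INFO (the default), DEBUG, and TRACE;
# sensors registered at a finer level than this setting are not recorded at all,
# so lowering the level here is what actually enables those metrics.
metrics.recording.level=DEBUG
```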





[jira] [Assigned] (KAFKA-13898) metrics.recording.level is underdocumented

2024-06-18 Thread Ksolves (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ksolves reassigned KAFKA-13898:
---

Assignee: Ksolves  (was: Richard Joerger)

> metrics.recording.level is underdocumented
> --
>
> Key: KAFKA-13898
> URL: https://issues.apache.org/jira/browse/KAFKA-13898
> Project: Kafka
>  Issue Type: Improvement
>  Components: docs
>Reporter: Tom Bentley
>Assignee: Ksolves
>Priority: Minor
>  Labels: newbie
>
> metrics.recording.level is only briefly described in the documentation. In 
> particular the recording level associated with each metric is not documented, 
> which makes it difficult to know the effect of changing the level.





Re: [PR] KAFKA-15623, Migrate test of Stream module to Junit5 (Stream state) [kafka]

2024-06-18 Thread via GitHub


m1a2st commented on PR #16356:
URL: https://github.com/apache/kafka/pull/16356#issuecomment-2175916646

   @chia7712, Thanks for your comments, PTAL





Re: [PR] KAFKA-14511: extend AlterIncrementalConfigs API to support group config [kafka]

2024-06-18 Thread via GitHub


AndrewJSchofield commented on code in PR #15067:
URL: https://github.com/apache/kafka/pull/15067#discussion_r1644173569


##
core/src/main/scala/kafka/server/KafkaApis.scala:
##
@@ -3054,6 +3054,8 @@ class KafkaApis(val requestChannel: RequestChannel,
   authHelper.authorize(originalRequest.context, ALTER_CONFIGS, 
CLUSTER, CLUSTER_NAME)
 case ConfigResource.Type.TOPIC =>
   authHelper.authorize(originalRequest.context, ALTER_CONFIGS, TOPIC, 
resource.name)
+case ConfigResource.Type.GROUP =>

Review Comment:
   I think you also need to add `case ConfigResource.Type.GROUP => 
Errors.GROUP_AUTHORIZATION_FAILED` to `configsAuthorizationApiError`.



##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupMetadataManager.java:
##
@@ -355,7 +370,7 @@ GroupMetadataManager build() {
 private final int consumerGroupHeartbeatIntervalMs;

Review Comment:
   The comment should probably read "The default heartbeat interval for 
consumer groups."






Re: [PR] KAFKA-16936 : Upgrade slf4j, sys property for provider [kafka]

2024-06-18 Thread via GitHub


mimaison commented on code in PR #16324:
URL: https://github.com/apache/kafka/pull/16324#discussion_r1644295632


##
build.gradle:
##
@@ -928,14 +928,15 @@ project(':server') {
 implementation libs.jacksonDatabind
 
 implementation libs.slf4jApi
+implementation libs.slf4jReload4j

Review Comment:
   Based on 
https://issues.apache.org/jira/browse/KAFKA-16936?focusedCommentId=17855156&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17855156
 it also seems this could break users that provide their own slf4j backend, for 
example someone using `slf4j-simple-1.5.5.jar`. After this change they would 
need to switch to `slf4j-simple-2.0.13.jar`.
   
   For that reason, we need to undo that change and we can do a small KIP to 
bump slf4j in 4.0.






Re: [PR] MINOR: Fix doc for zookeeper.ssl.client.enable [kafka]

2024-06-18 Thread via GitHub


jlprat commented on PR #16374:
URL: https://github.com/apache/kafka/pull/16374#issuecomment-2175867344

   Thanks!





Re: [PR] MINOR: Fix doc for zookeeper.ssl.client.enable [kafka]

2024-06-18 Thread via GitHub


mimaison commented on PR #16374:
URL: https://github.com/apache/kafka/pull/16374#issuecomment-2175866186

   I cherry-picked it to 3.8: 
https://github.com/apache/kafka/commit/6669c3050ed6f1b4fab63742994ff00ee958c0c9





Re: [PR] KAFKA-16725: Adjust share group configs to match KIP [kafka]

2024-06-18 Thread via GitHub


omkreddy commented on code in PR #16368:
URL: https://github.com/apache/kafka/pull/16368#discussion_r1644289936


##
server/src/main/java/org/apache/kafka/server/config/ShareGroupConfigs.java:
##
@@ -88,9 +88,9 @@ public class ShareGroupConfigs {
 public static final ConfigDef CONFIG_DEF =  new ConfigDef()
 .defineInternal(SHARE_GROUP_ENABLE_CONFIG, BOOLEAN, 
SHARE_GROUP_ENABLE_DEFAULT, null, MEDIUM, SHARE_GROUP_ENABLE_DOC)
 .define(SHARE_GROUP_DELIVERY_COUNT_LIMIT_CONFIG, INT, 
SHARE_GROUP_DELIVERY_COUNT_LIMIT_DEFAULT, between(2, 10), MEDIUM, 
SHARE_GROUP_DELIVERY_COUNT_LIMIT_DOC)
-.define(SHARE_GROUP_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DEFAULT, atLeast(1), MEDIUM, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DOC)
-.define(SHARE_GROUP_MIN_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_MIN_RECORD_LOCK_DURATION_MS_DEFAULT, atLeast(1), MEDIUM, 
SHARE_GROUP_MIN_RECORD_LOCK_DURATION_MS_DOC)
-.define(SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS_DEFAULT, atLeast(1), MEDIUM, 
SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS_DOC)
+.define(SHARE_GROUP_RECORD_LOCK_DURATION_MS_CONFIG, INT, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DEFAULT, between(1000, 6), MEDIUM, 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DOC)

Review Comment:
   We are setting the limit for SHARE_GROUP_RECORD_LOCK_DURATION_MS_DEFAULT to 
`between(1000, 6)`, but at the same time the 
SHARE_GROUP_MAX_RECORD_LOCK_DURATION_MS_DEFAULT limits are set to 
`between(3, 360)`. 
   The SHARE_GROUP_RECORD_LOCK_DURATION_MS_DEFAULT limits won't allow us to go 
beyond 6. I think we can set `atLeast(1)` for 
SHARE_GROUP_RECORD_LOCK_DURATION_MS_DEFAULT, and the limits will then be taken 
care of by the min and max configs.
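To illustrate the suggestion: a static `between(...)` validator on the base config caps it regardless of what the separate max config allows, while `atLeast(...)` leaves the ceiling to the min/max configs. A minimal Python sketch follows; the helpers are stand-ins, not Kafka's actual `ConfigDef` API, and the bounds are illustrative, not the real limits in `ShareGroupConfigs`:

```python
# Minimal stand-ins for ConfigDef's range validators (hypothetical helpers).
def between(lo, hi):
    """Validator accepting only values in [lo, hi] -- a hard static ceiling."""
    return lambda v: lo <= v <= hi

def at_least(lo):
    """Validator with a lower bound only; the ceiling is left to the
    separate min/max configs, as the review suggests."""
    return lambda v: v >= lo

tight = between(1000, 60000)   # illustrative bounds
loose = at_least(1)

print(tight(120000))  # rejected by the static upper bound
print(loose(120000))  # accepted; the max config would enforce the ceiling
```

The point of switching to `at_least` is that raising the max config later does not require also widening a hard-coded range on the base config.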
   
   
   






Re: [PR] MINOR: Fix doc for zookeeper.ssl.client.enable [kafka]

2024-06-18 Thread via GitHub


jlprat commented on PR #16374:
URL: https://github.com/apache/kafka/pull/16374#issuecomment-2175862809

   @mimaison will you cherry-pick it to `3.8` or shall I do it?





Re: [PR] MINOR: Fix doc for zookeeper.ssl.client.enable [kafka]

2024-06-18 Thread via GitHub


mimaison merged PR #16374:
URL: https://github.com/apache/kafka/pull/16374





Re: [PR] MINOR: Fix doc for zookeeper.ssl.client.enable [kafka]

2024-06-18 Thread via GitHub


mimaison commented on PR #16374:
URL: https://github.com/apache/kafka/pull/16374#issuecomment-2175857110

   None of the test failures are related, merging to trunk





Re: [PR] MINOR: Refresh of the docs [kafka]

2024-06-18 Thread via GitHub


mimaison commented on PR #16375:
URL: https://github.com/apache/kafka/pull/16375#issuecomment-2175856402

   If it's a pain to review, I can split the PR. 





Re: [PR] KAFKA-10787: Apply spotless to `streams:examples` and `streams-scala` [kafka]

2024-06-18 Thread via GitHub


chia7712 merged PR #16378:
URL: https://github.com/apache/kafka/pull/16378





[jira] [Resolved] (KAFKA-16958) add `STRICT_STUBS` to `EndToEndLatencyTest`, `OffsetCommitCallbackInvokerTest`, `ProducerPerformanceTest`, and `TopologyTest`

2024-06-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16958.

Fix Version/s: 3.9.0
   Resolution: Fixed

> add `STRICT_STUBS` to `EndToEndLatencyTest`, 
> `OffsetCommitCallbackInvokerTest`, `ProducerPerformanceTest`, and 
> `TopologyTest`
> -
>
> Key: KAFKA-16958
> URL: https://issues.apache.org/jira/browse/KAFKA-16958
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: dujian0068
>Priority: Minor
> Fix For: 3.9.0
>
>
> They all need `@MockitoSettings(strictness = Strictness.STRICT_STUBS)`





Re: [PR] KAFKA-16958 add STRICT_STUBS to EndToEndLatencyTest, OffsetCommitCallbackInvokerTest, ProducerPerformanceTest, and TopologyTest [kafka]

2024-06-18 Thread via GitHub


chia7712 merged PR #16348:
URL: https://github.com/apache/kafka/pull/16348





Re: [PR] MINOR: use 2 logdirs in ZK migration system tests [kafka]

2024-06-18 Thread via GitHub


soarez merged PR #15394:
URL: https://github.com/apache/kafka/pull/15394





[jira] [Resolved] (KAFKA-16547) add test for DescribeConfigsOptions#includeDocumentation

2024-06-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16547.

Fix Version/s: 3.9.0
   Resolution: Fixed

> add test for DescribeConfigsOptions#includeDocumentation
> 
>
> Key: KAFKA-16547
> URL: https://issues.apache.org/jira/browse/KAFKA-16547
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: TengYao Chi
>Priority: Major
> Fix For: 3.9.0
>
>
> As the title says, we have no tests for the query option.
> If the option is configured to false, `ConfigEntry#documentation` should be 
> null. Otherwise, it should return the config documentation.





Re: [PR] KAFKA-16547: Add test for DescribeConfigsOptions#includeDocumentation [kafka]

2024-06-18 Thread via GitHub


chia7712 merged PR #16355:
URL: https://github.com/apache/kafka/pull/16355





[jira] [Commented] (KAFKA-16976) Improve the dynamic config handling for RemoteLogManagerConfig when a broker is restarted.

2024-06-18 Thread Chia Chuan Yu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855873#comment-17855873
 ] 

Chia Chuan Yu commented on KAFKA-16976:
---

Hi, [~satish.duggana]

Can I have this one please? Thanks!

> Improve the dynamic config handling for RemoteLogManagerConfig when a broker 
> is restarted.
> --
>
> Key: KAFKA-16976
> URL: https://issues.apache.org/jira/browse/KAFKA-16976
> Project: Kafka
>  Issue Type: Task
>Reporter: Satish Duggana
>Priority: Major
> Fix For: 3.9.0
>
>
> This is a followup on the discussion: 
> https://github.com/apache/kafka/pull/16353#pullrequestreview-2121953295





Re: [PR] KAFKA-16957: Enable KafkaConsumerTest#configurableObjectsShouldSeeGeneratedClientId to work with CLASSIC and CONSUMER [kafka]

2024-06-18 Thread via GitHub


chia7712 commented on PR #16370:
URL: https://github.com/apache/kafka/pull/16370#issuecomment-2175788472

   @chiacyu could you please rebase code?





Re: [PR] KAFKA-16936 : Upgrade slf4j, sys property for provider [kafka]

2024-06-18 Thread via GitHub


chia7712 commented on code in PR #16324:
URL: https://github.com/apache/kafka/pull/16324#discussion_r1644239189


##
build.gradle:
##
@@ -928,14 +928,15 @@ project(':server') {
 implementation libs.jacksonDatabind
 
 implementation libs.slf4jApi
+implementation libs.slf4jReload4j

Review Comment:
   Agree to remove this change, as it is imported by other modules (tools and 
stream-example).






Re: [PR] KAFKA-15623 Migrate remaining tests in streams module to JUnit 5 (integration & internals) [kafka]

2024-06-18 Thread via GitHub


chia7712 commented on PR #16360:
URL: https://github.com/apache/kafka/pull/16360#issuecomment-217532

   @brandboat please take a look at failed tests





Re: [PR] KAFKA-16969: Log error if config conficts with MV [kafka]

2024-06-18 Thread via GitHub


soarez commented on PR #16366:
URL: https://github.com/apache/kafka/pull/16366#issuecomment-2175775274

   Thanks @showuon !





Re: [PR] KAFKA-16921: Migrate test of Stream module to Junit5 (Stream state) [kafka]

2024-06-18 Thread via GitHub


chia7712 commented on code in PR #16356:
URL: https://github.com/apache/kafka/pull/16356#discussion_r1644229011


##
streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBVersionedStoreSegmentValueFormatterTest.java:
##
@@ -18,61 +18,49 @@
 
 import static org.hamcrest.CoreMatchers.equalTo;
 import static org.hamcrest.MatcherAssert.assertThat;
-import static org.junit.Assert.assertThrows;
+import static org.junit.jupiter.api.Assertions.assertThrows;
 
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.stream.Stream;
+
 import 
org.apache.kafka.streams.state.internals.RocksDBVersionedStoreSegmentValueFormatter.SegmentValue;
 import 
org.apache.kafka.streams.state.internals.RocksDBVersionedStoreSegmentValueFormatter.SegmentValue.SegmentSearchResult;
-import org.junit.Test;
-import org.junit.experimental.runners.Enclosed;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
+import org.junit.jupiter.params.ParameterizedTest;
+import org.junit.jupiter.params.provider.Arguments;
+import org.junit.jupiter.params.provider.MethodSource;
 
-@RunWith(Enclosed.class)
 public class RocksDBVersionedStoreSegmentValueFormatterTest {
 
 /**
  * Non-exceptional scenarios which are expected to occur during regular 
store operation.
  */
-@RunWith(Parameterized.class)
 public static class ExpectedCasesTest {

Review Comment:
   I mean - please do a bit of refactoring to avoid the nested tests.






[jira] [Updated] (KAFKA-16948) Reset tier lag metrics on becoming follower

2024-06-18 Thread Kamal Chandraprakash (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kamal Chandraprakash updated KAFKA-16948:
-
Issue Type: Bug  (was: Task)

> Reset tier lag metrics on becoming follower
> ---
>
> Key: KAFKA-16948
> URL: https://issues.apache.org/jira/browse/KAFKA-16948
> Project: Kafka
>  Issue Type: Bug
>Reporter: Kamal Chandraprakash
>Assignee: Kamal Chandraprakash
>Priority: Major
> Fix For: 3.8.0
>
>
> Tier lag metrics such as remoteCopyLagBytes and remoteCopyLagSegments are 
> sometimes not cleared when the node transitions from leader to follower after 
> a rolling restart.





[jira] [Assigned] (KAFKA-16976) Improve the dynamic config handling for RemoteLogManagerConfig when a broker is restarted.

2024-06-18 Thread Kamal Chandraprakash (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kamal Chandraprakash reassigned KAFKA-16976:


Assignee: (was: Kamal Chandraprakash)

> Improve the dynamic config handling for RemoteLogManagerConfig when a broker 
> is restarted.
> --
>
> Key: KAFKA-16976
> URL: https://issues.apache.org/jira/browse/KAFKA-16976
> Project: Kafka
>  Issue Type: Task
>Reporter: Satish Duggana
>Priority: Major
> Fix For: 3.9.0
>
>
> This is a followup on the discussion: 
> https://github.com/apache/kafka/pull/16353#pullrequestreview-2121953295





Re: [PR] KAFKA-16921: Migrate test of connect module to Junit5 (Runtime subpackage) [kafka]

2024-06-18 Thread via GitHub


chia7712 merged PR #16350:
URL: https://github.com/apache/kafka/pull/16350





Re: [PR] KAFKA-16921: Migrate test of Stream module to Junit5 (Stream state) [kafka]

2024-06-18 Thread via GitHub


chia7712 commented on PR #16356:
URL: https://github.com/apache/kafka/pull/16356#issuecomment-2175742237

   @m1a2st Could you please update the jira number? It should be KAFKA-15623, 
right?





Re: [PR] MINOR: Fix doc for zookeeper.ssl.client.enable [kafka]

2024-06-18 Thread via GitHub


showuon commented on PR #16374:
URL: https://github.com/apache/kafka/pull/16374#issuecomment-2175737419

   Looks great! Thanks!





Re: [PR] MINOR: Refresh of the docs [kafka]

2024-06-18 Thread via GitHub


mimaison commented on PR #16375:
URL: https://github.com/apache/kafka/pull/16375#issuecomment-2175737130

   It's not easy, since this includes changes all over the website. As with 
code changes, reviewers should test this change locally.





Re: [PR] KAFKA-16921: Migrate test of connect module to Junit5 (Runtime direct) [kafka]

2024-06-18 Thread via GitHub


chia7712 merged PR #16351:
URL: https://github.com/apache/kafka/pull/16351





Re: [PR] MINOR: Fix doc for zookeeper.ssl.client.enable [kafka]

2024-06-18 Thread via GitHub


mimaison commented on PR #16374:
URL: https://github.com/apache/kafka/pull/16374#issuecomment-2175725193

   With the fix:
   [screenshot: https://github.com/apache/kafka/assets/903615/875e7c4a-fb60-4ea0-ba4c-8e4f5827655c]





Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-18 Thread via GitHub


OmniaGM commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1644115910


##
clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java:
##
@@ -126,15 +134,15 @@ public ProduceRequest(ProduceRequestData produceRequestData, short version) {
     }
 
     // visible for testing
-    Map<TopicPartition, Integer> partitionSizes() {
+    Map<TopicIdPartition, Integer> partitionSizes() {
         if (partitionSizes == null) {
             // this method may be called by different thread (see the comment on data)
             synchronized (this) {
                 if (partitionSizes == null) {
-                    Map<TopicPartition, Integer> tmpPartitionSizes = new HashMap<>();
+                    Map<TopicIdPartition, Integer> tmpPartitionSizes = new HashMap<>();
                     data.topicData().forEach(topicData ->
                         topicData.partitionData().forEach(partitionData ->
-                            tmpPartitionSizes.compute(new TopicPartition(topicData.name(), partitionData.index()),
+                            tmpPartitionSizes.compute(new TopicIdPartition(topicData.topicId(), partitionData.index(), topicData.name()),

Review Comment:
   In the context of the `partitionSizes` method we don't need the topic name. However:
   1. I didn't want an if-else around this to decide whether to populate the topic name, as this logic handles both old and new versions.
   2. The `TopicIdPartition` constructor needs either a topic name to construct the `TopicPartition` automatically, or an already-constructed `TopicPartition`, which needs the topic name as well.
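   For readers following the thread, the constraint being described can be sketched like this (a simplified standalone stand-in for `org.apache.kafka.common.TopicIdPartition`, not the real class; `java.util.UUID` stands in for Kafka's `Uuid`):

```java
import java.util.Objects;
import java.util.UUID;

// Simplified stand-in for TopicIdPartition: both constructors ultimately
// need a topic name to build the inner TopicPartition, which is why the
// produce path must carry the name even when only the id is required.
public class TopicIdPartitionSketch {
    static class TopicPartition {
        final String topic;
        final int partition;
        TopicPartition(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
    }

    final UUID topicId;
    final TopicPartition topicPartition;

    // Constructor 1: topic name passed directly.
    TopicIdPartitionSketch(UUID topicId, int partition, String topicName) {
        this(topicId, new TopicPartition(topicName, partition));
    }

    // Constructor 2: pre-built TopicPartition, which itself holds the name.
    TopicIdPartitionSketch(UUID topicId, TopicPartition topicPartition) {
        this.topicId = Objects.requireNonNull(topicId);
        this.topicPartition = Objects.requireNonNull(topicPartition);
    }

    public static void main(String[] args) {
        UUID id = UUID.fromString("00000000-0000-0000-0000-000000000001");
        TopicIdPartitionSketch tip = new TopicIdPartitionSketch(id, 0, "test");
        System.out.println(tip.topicPartition.topic + "-" + tip.topicPartition.partition);
    }
}
```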



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-18 Thread via GitHub


dajac commented on code in PR #16347:
URL: https://github.com/apache/kafka/pull/16347#discussion_r1644109817


##
server-common/src/main/java/org/apache/kafka/server/common/GroupVersion.java:
##
@@ -22,7 +22,7 @@
 public enum GroupVersion implements FeatureVersion {
 
 // Version 1 enables the consumer rebalance protocol (KIP-848).
-GV_1(1, MetadataVersion.IBP_4_0_IV0, Collections.emptyMap());
+GV_1(1, MetadataVersion.IBP_3_9_IV0, Collections.emptyMap());

Review Comment:
   We need to revert this change.



##
server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java:
##
@@ -202,11 +202,20 @@ public enum MetadataVersion {
 // Add new fetch request version for KIP-951
 IBP_3_7_IV4(19, "3.7", "IV4", false),
 
+// New version for the Kafka 3.8.0 release.
+IBP_3_8_IV0(20, "3.8", "IV0", false),
+
+//
+// NOTE: MetadataVersions after this point are unstable and may be changed.
+// If users attempt to use an unstable MetadataVersion, they will get an 
error.
+// Please move this comment when updating the LATEST_PRODUCTION constant.
+//
+
 // Add ELR related supports (KIP-966).
-IBP_3_8_IV0(20, "3.8", "IV0", true),
+IBP_3_9_IV0(21, "3.9", "IV0", true),
 
 // Introduce version 1 of the GroupVersion feature (KIP-848).
-IBP_4_0_IV0(21, "4.0", "IV0", false);
+IBP_4_0_IV0(22, "4.0", "IV0", true);

Review Comment:
   For my understanding, what's the reason for using `true` here?
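   The NOTE in the diff above refers to the gate that rejects unstable versions. Roughly, and purely as an illustrative sketch (the real logic lives in `org.apache.kafka.server.common.MetadataVersion` and is more involved), everything declared after the `LATEST_PRODUCTION` marker is refused unless unstable versions are explicitly enabled:

```java
// Illustrative sketch of the LATEST_PRODUCTION gate: versions past the
// marker are unstable and using one without opting in raises an error.
public class MetadataVersionGateSketch {
    enum Version { IBP_3_8_IV0, IBP_3_9_IV0, IBP_4_0_IV0 }

    // Everything after this constant is considered unstable.
    static final Version LATEST_PRODUCTION = Version.IBP_3_8_IV0;

    static void checkUsable(Version v, boolean unstableAllowed) {
        if (!unstableAllowed && v.ordinal() > LATEST_PRODUCTION.ordinal()) {
            throw new IllegalArgumentException(v + " is not yet a stable version.");
        }
    }

    public static void main(String[] args) {
        checkUsable(Version.IBP_3_8_IV0, false); // stable: accepted silently
        try {
            checkUsable(Version.IBP_3_9_IV0, false);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```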



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-18 Thread via GitHub


OmniaGM commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1644119109


##
clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java:
##
@@ -51,13 +52,20 @@ public static Builder forMagic(byte magic, 
ProduceRequestData data) {
 if (magic < RecordBatch.MAGIC_VALUE_V2) {
 minVersion = 2;
 maxVersion = 2;
+} else if (canNotSupportTopicId(data)) {
+minVersion = 3;
+maxVersion = 11;
 } else {
 minVersion = 3;
 maxVersion = ApiKeys.PRODUCE.latestVersion();
 }
 return new Builder(minVersion, maxVersion, data);
 }
 
+private static boolean canNotSupportTopicId(ProduceRequestData data) {
+return data.topicData().stream().anyMatch(d -> d.topicId() == null || 
d.topicId() == Uuid.ZERO_UUID);

Review Comment:
   Just covering our bases: in this method we will not get null, as the ProduceRequest constructor has ZERO_UUID as the default. Will tidy this up.
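   The check under discussion can be sketched as follows (class and method names here are illustrative stand-ins, not the real `ProduceRequest` builder; `java.util.UUID` stands in for Kafka's `Uuid`):

```java
import java.util.Arrays;
import java.util.List;
import java.util.UUID;

// Sketch of the version-capping logic: a topic entry whose id is the
// zero UUID (the default) forces the builder to cap maxVersion at 11,
// the last version keyed by topic name. The null check is belt-and-braces.
public class ProduceVersionSketch {
    static final UUID ZERO_UUID = new UUID(0L, 0L);

    static class TopicData {
        final UUID topicId;
        TopicData(UUID topicId) { this.topicId = topicId; }
    }

    static boolean cannotSupportTopicId(List<TopicData> topics) {
        return topics.stream().anyMatch(t -> t.topicId == null || ZERO_UUID.equals(t.topicId));
    }

    static short maxVersion(List<TopicData> topics) {
        return cannotSupportTopicId(topics) ? (short) 11 : (short) 12;
    }

    public static void main(String[] args) {
        List<TopicData> noIds = Arrays.asList(new TopicData(ZERO_UUID));
        List<TopicData> withIds = Arrays.asList(new TopicData(UUID.randomUUID()));
        System.out.println(maxVersion(noIds));
        System.out.println(maxVersion(withIds));
    }
}
```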



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Don't swallow validateReconfiguration exceptions [kafka]

2024-06-18 Thread via GitHub


rajinisivaram commented on code in PR #16346:
URL: https://github.com/apache/kafka/pull/16346#discussion_r1644085534


##
core/src/main/scala/kafka/server/DynamicBrokerConfig.scala:
##
@@ -640,8 +640,8 @@ class DynamicBrokerConfig(private val kafkaConfig: 
KafkaConfig) extends Logging
   reconfigurable.validateReconfiguration(newConfigs)
 } catch {
   case e: ConfigException => throw e
-  case _: Exception =>
-throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass}")
+  case e: Exception =>
+throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass} due to: $e")

Review Comment:
   We have fixed all known issues with leaked credentials. But we should still be careful about including exception strings related to configuration: the exceptions could come from third-party libraries and we cannot be sure what their messages contain until we have verified that callers have already sanitized them.
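   One way to act on this concern, sketched here with illustrative names rather than the real `DynamicBrokerConfig` code, is to include only the underlying exception's class name, so unvetted messages from third-party `Reconfigurable` implementations cannot leak secrets into the `ConfigException`:

```java
// Sketch of the trade-off discussed above: keep enough of the cause to
// aid debugging (its class), but drop its message, which may contain
// credentials supplied by code outside our control.
public class SanitizedConfigError {
    static String validationError(String configNames, Class<?> reconfigurable, Exception cause) {
        return "Validation of dynamic config update of " + configNames
            + " failed with class " + reconfigurable.getName()
            + " due to: " + cause.getClass().getName();  // class only, no message
    }

    public static void main(String[] args) {
        Exception leaky = new IllegalStateException("password=hunter2");
        String msg = validationError("[ssl.keystore.password]", Object.class, leaky);
        System.out.println(msg);
        System.out.println(msg.contains("hunter2"));
    }
}
```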



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-18 Thread via GitHub


OmniaGM commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1644094646


##
clients/src/test/java/org/apache/kafka/common/requests/ProduceRequestTest.java:
##
@@ -120,15 +121,38 @@ public void testBuildWithCurrentMessageFormat() {
 ProduceRequest.Builder requestBuilder = 
ProduceRequest.forMagic(RecordBatch.CURRENT_MAGIC_VALUE,
 new ProduceRequestData()
 .setTopicData(new 
ProduceRequestData.TopicProduceDataCollection(Collections.singletonList(
-new 
ProduceRequestData.TopicProduceData().setName("test").setPartitionData(Collections.singletonList(
-new 
ProduceRequestData.PartitionProduceData().setIndex(9).setRecords(builder.build()
+new 
ProduceRequestData.TopicProduceData()
+.setName("test")

Review Comment:
   No we don't have a case where we will write both. However if we passed both 
the request will drop the name with version >= 12. I'll remove `setName` from 
this test so we don't give the wrong impression that we are expecting both



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: update documentation link to 3.8 [kafka]

2024-06-18 Thread via GitHub


jlprat merged PR #16382:
URL: https://github.com/apache/kafka/pull/16382


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-18 Thread via GitHub


OmniaGM commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1644080966


##
core/src/main/scala/kafka/server/KafkaApis.scala:
##
@@ -608,40 +608,55 @@ class KafkaApis(val requestChannel: RequestChannel,
   }
 }
 
-val unauthorizedTopicResponses = mutable.Map[TopicPartition, 
PartitionResponse]()
-val nonExistingTopicResponses = mutable.Map[TopicPartition, 
PartitionResponse]()
-val invalidRequestResponses = mutable.Map[TopicPartition, 
PartitionResponse]()
-val authorizedRequestInfo = mutable.Map[TopicPartition, MemoryRecords]()
+val unauthorizedTopicResponses = mutable.Map[TopicIdPartition, 
PartitionResponse]()
+val nonExistingTopicResponses = mutable.Map[TopicIdPartition, 
PartitionResponse]()
+val invalidRequestResponses = mutable.Map[TopicIdPartition, 
PartitionResponse]()
+val authorizedRequestInfo = mutable.Map[TopicIdPartition, MemoryRecords]()
+val topicIdToPartitionData = new mutable.ArrayBuffer[(TopicIdPartition, 
ProduceRequestData.PartitionProduceData)]
+
+produceRequest.data.topicData.forEach { topic =>
+  topic.partitionData.forEach { partition =>
+val (topicName, topicId) = if (produceRequest.version() >= 12) {
+  (metadataCache.getTopicName(topic.topicId).getOrElse(topic.name), 
topic.topicId())
+} else {
+  (topic.name(), metadataCache.getTopicId(topic.name()))
+}
+
+val topicPartition = new TopicPartition(topicName, partition.index())
+if (topicName == null || topicName.isEmpty)

Review Comment:
   You're right, we don't need the null check, as I use `metadataCache.getTopicName(topic.topicId).getOrElse(topic.name)`; if the `metadataCache` doesn't contain the topic name, the default is the empty string, which is the value of `topic.name`.
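   The version-dependent resolution under discussion can be sketched like this (an illustrative stand-in for the metadata cache lookup, not `KafkaApis` itself; `java.util.UUID` stands in for Kafka's `Uuid`):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch: v12+ requests carry a topic id and the broker resolves the
// name; older versions carry the name and the broker resolves the id.
// An unresolvable id yields an empty name (answered later with an
// unknown-topic error); an unresolvable name yields the zero UUID.
public class TopicResolutionSketch {
    static final UUID ZERO_UUID = new UUID(0L, 0L);
    static final Map<UUID, String> idToName = new HashMap<>();
    static final Map<String, UUID> nameToId = new HashMap<>();

    static String[] resolve(int version, String name, UUID id) {
        if (version >= 12) {
            String resolved = idToName.getOrDefault(id, name == null ? "" : name);
            return new String[] { resolved, id.toString() };
        } else {
            UUID resolved = nameToId.getOrDefault(name, ZERO_UUID);
            return new String[] { name, resolved.toString() };
        }
    }

    public static void main(String[] args) {
        UUID id = UUID.fromString("00000000-0000-0000-0000-000000000042");
        idToName.put(id, "payments");
        nameToId.put("payments", id);
        System.out.println(resolve(12, "", id)[0]);   // name recovered from id
        System.out.println(resolve(11, "payments", ZERO_UUID)[1].endsWith("42"));
    }
}
```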



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-18 Thread via GitHub


OmniaGM commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1644074097


##
core/src/test/scala/unit/kafka/coordinator/group/GroupCoordinatorTest.scala:
##
@@ -92,6 +92,7 @@ class GroupCoordinatorTest {
   private val protocolSuperset = List((protocolName, metadata), ("roundrobin", 
metadata))
   private val requireStable = true
   private var groupPartitionId: Int = -1
+  val groupMetadataTopicId = Uuid.fromString("JaTH2JYK2ed2GzUapg8tgg")

Review Comment:
   It's easier to reason about, as we know what to expect. I also saw feedback in other PR reviews suggesting a fixed UUID instead of a random one.
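   A small illustration of why a fixed id helps in tests (using `java.util.UUID` here; Kafka's `Uuid` uses a URL-safe base64 string form like the one above, but the reproducibility argument is the same):

```java
import java.util.UUID;

// A hard-coded string always parses to the same UUID, so assertions and
// log output are reproducible across runs, unlike UUID.randomUUID().
public class FixedUuidSketch {
    public static void main(String[] args) {
        UUID a = UUID.fromString("7dd0a60c-4bde-48bc-8f1a-1b0b1bc2c2a1");
        UUID b = UUID.fromString("7dd0a60c-4bde-48bc-8f1a-1b0b1bc2c2a1");
        System.out.println(a.equals(b));                              // deterministic
        System.out.println(UUID.randomUUID().equals(UUID.randomUUID())); // not
    }
}
```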



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: update documentation link to 3.8 [kafka]

2024-06-18 Thread via GitHub


jlprat commented on PR #16382:
URL: https://github.com/apache/kafka/pull/16382#issuecomment-2175509041

   I'm merging this as this doesn't affect CI. Also backporting to 3.8


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16749: Implemented share fetch messages (KIP-932) [kafka]

2024-06-18 Thread via GitHub


AndrewJSchofield commented on code in PR #16377:
URL: https://github.com/apache/kafka/pull/16377#discussion_r164433


##
core/src/main/java/kafka/server/share/SharePartitionManager.java:
##
@@ -317,6 +364,197 @@ public void close() throws Exception {
 // TODO: Provide Implementation
 }
 
+private ShareSessionKey shareSessionKey(String groupId, Uuid memberId) {
+return new ShareSessionKey(groupId, memberId);
+}
+
+private static String partitionsToLogString(Collection<TopicIdPartition> partitions) {
+return FetchSession.partitionsToLogString(partitions, 
log.isTraceEnabled());
+}
+
+/**
+ * Recursive function to process all the fetch requests present inside the 
fetch queue
+ */
+private void maybeProcessFetchQueue() {
+if (!processFetchQueueLock.compareAndSet(false, true)) {
+// The queue is already being processed hence avoid re-triggering.
+return;
+}
+
+// Initialize the topic partitions for which the fetch should be 
attempted.
+Map topicPartitionData = 
new LinkedHashMap<>();
+ShareFetchPartitionData shareFetchPartitionData = fetchQueue.poll();
+try {
+assert shareFetchPartitionData != null;
+shareFetchPartitionData.topicIdPartitions.forEach(topicIdPartition 
-> {
+SharePartitionKey sharePartitionKey = sharePartitionKey(
+shareFetchPartitionData.groupId,
+topicIdPartition
+);
+SharePartition sharePartition = 
partitionCacheMap.computeIfAbsent(sharePartitionKey,
+k -> new SharePartition(shareFetchPartitionData.groupId, 
topicIdPartition, maxInFlightMessages, maxDeliveryCount,
+recordLockDurationMs, timer, time));
+int partitionMaxBytes = 
shareFetchPartitionData.partitionMaxBytes.getOrDefault(topicIdPartition, 0);
+// Add the share partition to the list of partitions to be 
fetched only if we can
+// acquire the fetch lock on it.
+if (sharePartition.maybeAcquireFetchLock()) {
+// Fetching over a topic should be able to proceed if any 
one of the following 2 conditions are met:
+// 1. The fetch is to happen somewhere in between the 
record states cached in the share partition.
+//This is because in this case we don't need to check 
for the partition limit for in flight messages
+// 2. If condition 1 is not true, then that means we will 
be fetching new records which haven't been cached before.
+//In this case it is necessary to check if the 
partition limit for in flight messages has been reached.
+if (sharePartition.nextFetchOffset() != 
(sharePartition.endOffset() + 1) || sharePartition.canFetchRecords()) {

Review Comment:
   I think this logic would be better encapsulated within the SharePartition 
class.
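   The suggested encapsulation might look roughly like this (field and method names are illustrative, not the real `SharePartition` API):

```java
// Sketch: fold the two fetch preconditions into one method on the
// partition object, so the manager only asks "may I acquire records?".
public class SharePartitionSketch {
    long nextFetchOffset = 10;
    long endOffset = 9;       // nextFetchOffset == endOffset + 1: only new records left
    int inFlight = 0;
    int maxInFlight = 200;

    boolean canFetchRecords() { return inFlight < maxInFlight; }

    // Condition 1: re-fetching inside the cached range needs no limit check.
    // Condition 2: fetching brand-new records must respect the in-flight cap.
    boolean canAcquireRecords() {
        return nextFetchOffset != endOffset + 1 || canFetchRecords();
    }

    public static void main(String[] args) {
        SharePartitionSketch p = new SharePartitionSketch();
        System.out.println(p.canAcquireRecords()); // at end of cache, under cap
        p.inFlight = 200;
        System.out.println(p.canAcquireRecords()); // at end of cache, over cap
    }
}
```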



##
core/src/main/java/kafka/server/share/SharePartitionManager.java:
##
@@ -317,6 +364,197 @@ public void close() throws Exception {
 // TODO: Provide Implementation
 }
 
+private ShareSessionKey shareSessionKey(String groupId, Uuid memberId) {
+return new ShareSessionKey(groupId, memberId);
+}
+
+private static String partitionsToLogString(Collection<TopicIdPartition> partitions) {
+return FetchSession.partitionsToLogString(partitions, 
log.isTraceEnabled());
+}
+
+/**
+ * Recursive function to process all the fetch requests present inside the 
fetch queue
+ */
+private void maybeProcessFetchQueue() {
+if (!processFetchQueueLock.compareAndSet(false, true)) {
+// The queue is already being processed hence avoid re-triggering.
+return;
+}
+
+// Initialize the topic partitions for which the fetch should be 
attempted.
+Map topicPartitionData = 
new LinkedHashMap<>();
+ShareFetchPartitionData shareFetchPartitionData = fetchQueue.poll();

Review Comment:
   I'm not convinced that this will always return an entry. Adding an entry and 
then calling this method are not synchronized (which is fine) but I expect 
there is a situation in which the added entry has already been processed before 
this method is called.
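   A minimal illustration of the race being described: the CAS gate and the queue are updated independently, so a `poll()` that returns null is a normal outcome to handle, not an invariant to assert (queue and gate names here are illustrative):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: enqueue and drain are not synchronized, so a drainer that wins
// the CAS may still find the queue empty because an earlier pass already
// consumed the entry that triggered this call.
public class FetchQueueSketch {
    static final Queue<String> fetchQueue = new ConcurrentLinkedQueue<>();
    static final AtomicBoolean processing = new AtomicBoolean(false);

    static String drainOnce() {
        if (!processing.compareAndSet(false, true)) {
            return "busy"; // another thread is already draining
        }
        try {
            String entry = fetchQueue.poll();
            return entry == null ? "empty" : "processed:" + entry;
        } finally {
            processing.set(false);
        }
    }

    public static void main(String[] args) {
        fetchQueue.add("fetch-1");
        System.out.println(drainOnce()); // entry present
        System.out.println(drainOnce()); // already consumed: poll() is null
    }
}
```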



##
core/src/main/java/kafka/server/share/SharePartitionManager.java:
##
@@ -317,6 +364,197 @@ public void close() throws Exception {
 // TODO: Provide Implementation
 }
 
+private ShareSessionKey shareSessionKey(String groupId, Uuid memberId) {
+return new ShareSessionKey(groupId, memberId);
+}
+
+private static String partitionsToLogString(Collection<TopicIdPartition> partitions) {
+return FetchSession.partitionsToLogString(partitions, 
log.isTraceEnabled());
+}
+
+/**
+ * Recursive function to process all the fetch requests present inside the

[jira] [Updated] (KAFKA-16978) Apache Kafka 3.7.0 Docker Official Image release

2024-06-18 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora updated KAFKA-16978:
---
Description: 
Steps to release Docker Official Image as post-release process:
 # {*}Prepare Docker Official Image Source{*}: 

Provide the image type and kafka version to the {{Docker Prepare Docker Official 
Image Source}} workflow. It will generate an artifact containing the static 
Dockerfile and assets for that specific version. Download the same from the 
workflow.
{code:java}
image_type: jvm
kafka_version: kafka_version_for_docker_official_image{code}
 

      2. *Extract docker official image artifact:*
 * Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
providing it the path to the downloaded artifact. Ensure that this creates a 
new directory under {{{}docker/docker_official_images/kafka_version{}}}.
 * For example,
python extract_docker_official_image_artifact.py 
--path_to_downloaded_artifact=path/to/downloaded/artifact 
 *    Commit these changes to AK trunk.

 

   3. *Remove any versions for which Docker Official Images should not be 
supported IF ANY*
 * If there are any versions for which Docker Official Images should not be 
supported, remove the corresponding directories under 
{{{}docker/docker_official_images{}}}.
 * Commit these changes to AK trunk.
 

   4. *Docker Official Image Build and Test*

Provide the image type and kafka version to the {{Docker Official Image Build and 
Test}} workflow. It will generate a test report and CVE report that can be 
shared with the community, if need be.
{code:java}
For example,
image_type: jvm 
kafka_version: kafka_version_for_docker_official_image{code}
 

  5. *Generate and merge/release the PR for Docker Official Images repo*
 * Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
providing it the image type. Update the existing entry. 
 * For example,
python generate_kafka_pr_template.py --image-type=jvm 
 *  Copy this to raise a new PR in [Docker Hub's Docker Official 
Repo|https://github.com/docker-library/official-images/tree/master/library/kafka], 
which modifies the existing entry.

  was:
Steps to release Docker Official Image as post-release process:


 # {*}Prepare Docker Official Image Source{*}: 

Provide the image type and kafka version to {{Docker Prepare Docker Official 
Image Source}} workflow. It will generate a artifact containing the static 
Dockerfile and assets for that specific version. Download the same from the 
workflow.
{code:java}
image_type: jvm
kafka_version: kafka_version_for_docker_official_image{code}
 

      2. *Extract docker official image artifact:*
 * Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
providing it the path to the downloaded artifact. Ensure that this creates a 
new directory under {{{}docker/docker_official_images/kafka_version{}}}.
 * For example,
python extract_docker_official_image_artifact.py 
--path_to_downloaded_artifact=path/to/downloaded/artifact 
 *    Commit these changes to AK trunk.

 

   3. *Remove any versions for which Docker Official Images should not be 
supported IF ANY*
 * If there any versions for which Docker Official Images should not be 
supported, remove the corresponding directories under 
{{{}docker/docker_official_images{}}}.
 * Commit these changes to AK trunk.
   4. *Docker Official Image Build and Test*

Provide the image type and kafka version to {{Docker Official Image Build 
Test}} workflow. It will generate a test report and CVE report that can be 
shared with the community, if need be,
{code:java}
For example,
image_type: jvm 
kafka_version: kafka_version_for_docker_official_image{code}
 

  5. *Generate and merge/release the PR for Docker Official Images repo*
 * Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
providing it the image type. Update the existing entry. 
 * For example,
python generate_kafka_pr_template.py --image-type=jvm 
 *  Copy this to raise a new PR in [Docker Hub's Docker Official 
Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
 , which modifies the exisiting entry.


> Apache Kafka 3.7.0 Docker Official Image release
> 
>
> Key: KAFKA-16978
> URL: https://issues.apache.org/jira/browse/KAFKA-16978
> Project: Kafka
>  Issue Type: Task
>Reporter: Krish Vora
>Priority: Major
>
> Steps to release Docker Official Image as post-release process:
>  # {*}Prepare Docker Official Image Source{*}: 
> Provide the image type and kafka version to {{Docker Prepare Docker Official 
> Image Source}} workflow. It will generate a artifact containing the static 
> Dockerfile and assets for that specific version. Download the same from the 
> workflow.
> {code:java}
> image_type: jvm
> kafka_version: kafka_version_for_docker_official_

[jira] [Updated] (KAFKA-16978) Apache Kafka 3.7.0 Docker Official Image release

2024-06-18 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora updated KAFKA-16978:
---
Description: 
Steps to release Docker Official Image as post-release process:


 # {*}Prepare Docker Official Image Source{*}: 

Provide the image type and kafka version to {{Docker Prepare Docker Official 
Image Source}} workflow. It will generate a artifact containing the static 
Dockerfile and assets for that specific version. Download the same from the 
workflow.
{code:java}
image_type: jvm
kafka_version: kafka_version_for_docker_official_image{code}
 

      2. *Extract docker official image artifact:*
 * Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
providing it the path to the downloaded artifact. Ensure that this creates a 
new directory under {{{}docker/docker_official_images/kafka_version{}}}.
 * For example,
python extract_docker_official_image_artifact.py 
--path_to_downloaded_artifact=path/to/downloaded/artifact 
 *    Commit these changes to AK trunk.

 

   3. *Remove any versions for which Docker Official Images should not be 
supported IF ANY*
 * If there any versions for which Docker Official Images should not be 
supported, remove the corresponding directories under 
{{{}docker/docker_official_images{}}}.
 * Commit these changes to AK trunk.
   4. *Docker Official Image Build and Test*

Provide the image type and kafka version to {{Docker Official Image Build 
Test}} workflow. It will generate a test report and CVE report that can be 
shared with the community, if need be,
{code:java}
For example,
image_type: jvm 
kafka_version: kafka_version_for_docker_official_image{code}
 

  5. *Generate and merge/release the PR for Docker Official Images repo*
 * Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
providing it the image type. Update the existing entry. 
 * For example,
python generate_kafka_pr_template.py --image-type=jvm 
 *  Copy this to raise a new PR in [Docker Hub's Docker Official 
Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
 , which modifies the exisiting entry.

> Apache Kafka 3.7.0 Docker Official Image release
> 
>
> Key: KAFKA-16978
> URL: https://issues.apache.org/jira/browse/KAFKA-16978
> Project: Kafka
>  Issue Type: Task
>Reporter: Krish Vora
>Priority: Major
>
> Steps to release Docker Official Image as post-release process:
>  # {*}Prepare Docker Official Image Source{*}: 
> Provide the image type and kafka version to {{Docker Prepare Docker Official 
> Image Source}} workflow. It will generate a artifact containing the static 
> Dockerfile and assets for that specific version. Download the same from the 
> workflow.
> {code:java}
> image_type: jvm
> kafka_version: kafka_version_for_docker_official_image{code}
>  
>       2. *Extract docker official image artifact:*
>  * Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
> providing it the path to the downloaded artifact. Ensure that this creates a 
> new directory under {{{}docker/docker_official_images/kafka_version{}}}.
>  * For example,
> python extract_docker_official_image_artifact.py 
> --path_to_downloaded_artifact=path/to/downloaded/artifact 
>  *    Commit these changes to AK trunk.
>  
>    3. *Remove any versions for which Docker Official Images should not be 
> supported IF ANY*
>  * If there any versions for which Docker Official Images should not be 
> supported, remove the corresponding directories under 
> {{{}docker/docker_official_images{}}}.
>  * Commit these changes to AK trunk.
>    4. *Docker Official Image Build and Test*
> Provide the image type and kafka version to {{Docker Official Image Build 
> Test}} workflow. It will generate a test report and CVE report that can be 
> shared with the community, if need be,
> {code:java}
> For example,
> image_type: jvm 
> kafka_version: kafka_version_for_docker_official_image{code}
>  
>   5. *Generate and merge/release the PR for Docker Official Images repo*
>  * Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
> providing it the image type. Update the existing entry. 
>  * For example,
> python generate_kafka_pr_template.py --image-type=jvm 
>  *  Copy this to raise a new PR in [Docker Hub's Docker Official 
> Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
>  , which modifies the exisiting entry.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16978) Apache Kafka 3.7.0 Docker Official Image release

2024-06-18 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora updated KAFKA-16978:
---
Description: 
Steps to release Docker Official Image as post-release process:
 # {*}Prepare Docker Official Image Source{*}: 

Provide the image type and kafka version to the {{Docker Prepare Docker Official 
Image Source}} workflow. It will generate an artifact containing the static 
Dockerfile and assets for that specific version. Download the same from the 
workflow.
{code:java}
For example,
image_type: jvm
kafka_version: kafka_version_for_docker_official_image{code}
 

      2. *Extract docker official image artifact:*
 * Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
providing it the path to the downloaded artifact. Ensure that this creates a 
new directory under {{{}docker/docker_official_images/kafka_version{}}}.
 * 
{code:java}
For example,
python extract_docker_official_image_artifact.py 
--path_to_downloaded_artifact=path/to/downloaded/artifact {code}

 *    Commit these changes to AK trunk.

 

   3. *Remove any versions for which Docker Official Images should not be 
supported IF ANY*
 * If there are any versions for which Docker Official Images should not be 
supported, remove the corresponding directories under 
{{{}docker/docker_official_images{}}}.
 * Commit these changes to AK trunk.
 

   4. *Docker Official Image Build and Test*

Provide the image type and kafka version to the {{Docker Official Image Build and 
Test}} workflow. It will generate a test report and CVE report that can be 
shared with the community, if need be.
{code:java}
For example,
image_type: jvm 
kafka_version: kafka_version_for_docker_official_image{code}
 

  5. *Generate and merge/release the PR for Docker Official Images repo*
 * Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
providing it the image type. Update the existing entry. 
 * 
{code:java}
For example,
python generate_kafka_pr_template.py --image-type=jvm {code}

 *  Copy this to raise a new PR in [Docker Hub's Docker Official 
Repo|https://github.com/docker-library/official-images/tree/master/library/kafka], 
which modifies the existing entry.

  was:
Steps to release Docker Official Image as post-release process:
 # {*}Prepare Docker Official Image Source{*}: 

Provide the image type and kafka version to {{Docker Prepare Docker Official 
Image Source}} workflow. It will generate a artifact containing the static 
Dockerfile and assets for that specific version. Download the same from the 
workflow.
{code:java}
image_type: jvm
kafka_version: kafka_version_for_docker_official_image{code}
 

      2. *Extract docker official image artifact:*
 * Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
providing it the path to the downloaded artifact. Ensure that this creates a 
new directory under {{{}docker/docker_official_images/kafka_version{}}}.
 * For example,
python extract_docker_official_image_artifact.py 
--path_to_downloaded_artifact=path/to/downloaded/artifact 
 *    Commit these changes to AK trunk.

 

   3. *Remove any versions for which Docker Official Images should not be 
supported IF ANY*
 * If there any versions for which Docker Official Images should not be 
supported, remove the corresponding directories under 
{{{}docker/docker_official_images{}}}.
 * Commit these changes to AK trunk.
 

   4. *Docker Official Image Build and Test*

Provide the image type and kafka version to {{Docker Official Image Build 
Test}} workflow. It will generate a test report and CVE report that can be 
shared with the community, if need be,
{code:java}
For example,
image_type: jvm 
kafka_version: kafka_version_for_docker_official_image{code}
 

  5. *Generate and merge/release the PR for Docker Official Images repo*
 * Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
providing it the image type. Update the existing entry. 
 * For example,
python generate_kafka_pr_template.py --image-type=jvm 
 *  Copy this to raise a new PR in [Docker Hub's Docker Official 
Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
 , which modifies the exisiting entry.


> Apache Kafka 3.7.0 Docker Official Image release
> 
>
> Key: KAFKA-16978
> URL: https://issues.apache.org/jira/browse/KAFKA-16978
> Project: Kafka
>  Issue Type: Task
>Reporter: Krish Vora
>Priority: Major
>
> Steps to release Docker Official Image as post-release process:
>  # {*}Prepare Docker Official Image Source{*}: 
> Provide the image type and kafka version to {{Docker Prepare Docker Official 
> Image Source}} workflow. It will generate a artifact containing the static 
> Dockerfile and assets for that specific version. Download the same from the 
> workflow.
> {code:java}
> For example,

[jira] [Reopened] (KAFKA-16983) Generate the PR for Docker Official Images repo

2024-06-18 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora reopened KAFKA-16983:


> Generate the PR for Docker Official Images repo
> ---
>
> Key: KAFKA-16983
> URL: https://issues.apache.org/jira/browse/KAFKA-16983
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: Krish Vora
>Assignee: Krish Vora
>Priority: Major
>
> # Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
> providing it the image type. Update the existing entry. 
> {code:java}
> python generate_kafka_pr_template.py --image-type=jvm{code}
>  
>       2. Copy this to raise a new PR in [Docker Hub's Docker Official 
> Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
>  , which modifies the existing entry.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16983) Generate and merge/release the PR for Docker Official Images repo

2024-06-18 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora updated KAFKA-16983:
---
Summary: Generate and merge/release the PR for Docker Official Images repo  
(was: Generate the PR for Docker Official Images repo)

> Generate and merge/release the PR for Docker Official Images repo
> -
>
> Key: KAFKA-16983
> URL: https://issues.apache.org/jira/browse/KAFKA-16983
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: Krish Vora
>Assignee: Krish Vora
>Priority: Major
>
> # Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
> providing it the image type. Update the existing entry. 
> {code:java}
> python generate_kafka_pr_template.py --image-type=jvm{code}
>  
>       2. Copy this to raise a new PR in [Docker Hub's Docker Official 
> Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
>  , which modifies the existing entry.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] MINOR: update documentation link to 3.8 [kafka]

2024-06-18 Thread via GitHub


jlprat opened a new pull request, #16382:
URL: https://github.com/apache/kafka/pull/16382

   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16954: fix consumer close to release assignment in background [kafka]

2024-06-18 Thread via GitHub


jlprat commented on PR #16376:
URL: https://github.com/apache/kafka/pull/16376#issuecomment-2175307992

   Thanks @lucasbru , closed the Jira issue as well.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


