[GitHub] [kafka] kamalcph opened a new pull request, #14297: MINOR: Fix the TBRLMMRestart test.

2023-08-28 Thread via GitHub


kamalcph opened a new pull request, #14297:
URL: https://github.com/apache/kafka/pull/14297

   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] kamalcph commented on pull request #14297: MINOR: Fix the TBRLMMRestart test.

2023-08-28 Thread via GitHub


kamalcph commented on PR #14297:
URL: https://github.com/apache/kafka/pull/14297#issuecomment-1695148398

   @abhijeetk88 @satishd @showuon 
   
   This is a regression introduced by the #14127 patch.





[jira] [Commented] (KAFKA-15402) Performance regression on close consumer after upgrading to 3.5.0

2023-08-28 Thread Benoit Delbosc (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759478#comment-17759478
 ] 

Benoit Delbosc commented on KAFKA-15402:


Hello, thank you for taking the time to review this. I can confirm that 
commenting out the call to {{maybeCloseFetchSessions}} resolves the 
performance regression.

This method introduces an additional round trip to Kafka and takes 
approximately 300 ms (with Kafka running locally).

> Performance regression on close consumer after upgrading to 3.5.0
> -
>
> Key: KAFKA-15402
> URL: https://issues.apache.org/jira/browse/KAFKA-15402
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 3.5.0, 3.5.1
>Reporter: Benoit Delbosc
>Priority: Major
> Attachments: image-2023-08-24-18-51-21-720.png, 
> image-2023-08-24-18-51-57-435.png, image-2023-08-25-10-50-28-079.png
>
>
> Hi,
> After upgrading to Kafka client version 3.5.0, we have observed a significant 
> increase in the duration of our Java unit tests. These unit tests heavily 
> rely on the Kafka Admin, Producer, and Consumer API.
> When using Kafka server version 3.4.1, the duration of the unit tests 
> increased from 8 seconds (with Kafka client 3.4.1) to 18 seconds (with Kafka 
> client 3.5.0).
> Upgrading the Kafka server to 3.5.1 shows similar results.
> I have come across the issue KAFKA-15178, which could be the culprit. I will 
> attempt to test the proposed patch.
> In the meantime, if you have any ideas that could help identify and address 
> the regression, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15407) Not able to connect to kafka from the Private NLB from outside the VPC account

2023-08-28 Thread Shivakumar (Jira)
Shivakumar created KAFKA-15407:
--

 Summary: Not able to connect to kafka from the Private NLB from 
outside the VPC account 
 Key: KAFKA-15407
 URL: https://issues.apache.org/jira/browse/KAFKA-15407
 Project: Kafka
  Issue Type: Bug
  Components: clients, connect, consumer, producer, protocol
 Environment: Staging, PROD
Reporter: Shivakumar
 Attachments: image-2023-08-28-12-37-33-100.png

!image-2023-08-28-12-37-33-100.png|width=768,height=223!

Problem statement:
We are trying to connect to Kafka from another account/VPC.
Our Kafka runs in an EKS cluster, with a service pointing to the broker pods 
for connections.

We created a private link endpoint from Account B to our NLB, which fronts 
our Kafka in Account A.
We see connection resets from both the client and the target (Kafka) in the 
NLB monitoring tab of AWS.
We tried various combinations of listeners and advertised listeners, which 
did not help.

We assume we are missing some combination of listener and network-level 
configs with which this connection can be made.
Can you please guide us, as we are blocked on a major migration.
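
For what it is worth, cross-VPC access through a load balancer usually requires a dedicated listener whose advertised address is the endpoint DNS name reachable from the client VPC. A sketch of the commonly used listener layout follows; all names and ports are placeholders, not values from this ticket:

```properties
# Sketch only: separate internal and external listeners. The EXTERNAL
# advertised address must be the private-link endpoint DNS name that clients
# in Account B can resolve and reach. Note that each broker needs its own
# reachable external address/port through the NLB; otherwise the bootstrap
# connection succeeds but the per-broker metadata connections are reset.
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://broker-0.kafka.svc.cluster.local:9092,EXTERNAL://<vpce-endpoint-dns>:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```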





[jira] [Commented] (KAFKA-15407) Not able to connect to kafka from the Private NLB from outside the VPC account

2023-08-28 Thread Shivakumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759487#comment-17759487
 ] 

Shivakumar commented on KAFKA-15407:


[~viktorsomogyi] Can you please help us with this issue?

 

> Not able to connect to kafka from the Private NLB from outside the VPC 
> account 
> ---
>
> Key: KAFKA-15407
> URL: https://issues.apache.org/jira/browse/KAFKA-15407
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, connect, consumer, producer, protocol
> Environment: Staging, PROD
>Reporter: Shivakumar
>Priority: Blocker
> Attachments: image-2023-08-28-12-37-33-100.png
>
>
> !image-2023-08-28-12-37-33-100.png|width=768,height=223!
> Problem statement:
> We are trying to connect to Kafka from another account/VPC.
> Our Kafka runs in an EKS cluster, with a service pointing to the broker 
> pods for connections.
> We created a private link endpoint from Account B to our NLB, which fronts 
> our Kafka in Account A.
> We see connection resets from both the client and the target (Kafka) in 
> the NLB monitoring tab of AWS.
> We tried various combinations of listeners and advertised listeners, which 
> did not help.
> We assume we are missing some combination of listener and network-level 
> configs with which this connection can be made.
> Can you please guide us, as we are blocked on a major migration.





[GitHub] [kafka] vamossagar12 commented on pull request #14264: KAFKA-15403: Refactor @Test(expected) annotation with assertThrows

2023-08-28 Thread via GitHub


vamossagar12 commented on PR #14264:
URL: https://github.com/apache/kafka/pull/14264#issuecomment-1695252460

   Test failures seem unrelated. cc @cadonna, can you please take a look at 
this PR? Thanks!





[jira] [Commented] (KAFKA-15396) Add a metric indicating the version of the current running kafka server

2023-08-28 Thread hudeqi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759492#comment-17759492
 ] 

hudeqi commented on KAFKA-15396:


Thanks, I will submit it here.

> Add a metric indicating the version of the current running kafka server
> ---
>
> Key: KAFKA-15396
> URL: https://issues.apache.org/jira/browse/KAFKA-15396
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 3.5.1
>Reporter: hudeqi
>Assignee: hudeqi
>Priority: Major
>  Labels: kip-required
>
> At present, there is no metric exposing the Kafka version that a broker is 
> running. If multiple Kafka versions are deployed in a cluster for various 
> reasons, it is difficult to get an intuitive view of the version 
> distribution.





[jira] [Commented] (KAFKA-15378) Rolling upgrade system tests are failing

2023-08-28 Thread Lucas Brutschy (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759496#comment-17759496
 ] 

Lucas Brutschy commented on KAFKA-15378:


The requests Python library downgrade was only required to get the tests 
running; it does not fix the actual test failure.

The bounce upgrade tests from 0.10 to 3.6 that seem to fail for you are 
probably yet another problem.

Did you not get the test failures described in the ticket?

> Rolling upgrade system tests are failing
> 
>
> Key: KAFKA-15378
> URL: https://issues.apache.org/jira/browse/KAFKA-15378
> Project: Kafka
>  Issue Type: Task
>  Components: streams, system tests
>Affects Versions: 3.5.1
>Reporter: Lucas Brutschy
>Priority: Major
>
> The following system tests are failing:
> {noformat}
> kafkatest.tests.streams.streams_upgrade_test.StreamsUpgradeTest.test_rolling_upgrade_with_2_bounces.from_version=3.1.2.to_version=3.6.0-SNAPSHOT
> kafkatest.tests.streams.streams_upgrade_test.StreamsUpgradeTest.test_rolling_upgrade_with_2_bounces.from_version=3.2.3.to_version=3.6.0-SNAPSHOT
> kafkatest.tests.streams.streams_upgrade_test.StreamsUpgradeTest.test_rolling_upgrade_with_2_bounces.from_version=3.3.1.to_version=3.6.0-SNAPSHOT
> {noformat}
> See 
> [https://jenkins.confluent.io/job/system-test-kafka-branch-builder/5801/console]
>  for logs and other test data.
> Note that system tests currently only run with [this 
> fix|https://github.com/apache/kafka/commit/24d1780061a645bb2fbeefd8b8f50123c28ca94e];
>  I think a Python library update for a CVE broke the system tests... 





[GitHub] [kafka] divijvaidya merged pull request #14266: KAFKA-15294: Publish remote storage configs

2023-08-28 Thread via GitHub


divijvaidya merged PR #14266:
URL: https://github.com/apache/kafka/pull/14266





[GitHub] [kafka] divijvaidya commented on pull request #14266: KAFKA-15294: Publish remote storage configs

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14266:
URL: https://github.com/apache/kafka/pull/14266#issuecomment-1695276501

   Test failures are unrelated:
   ```
   [Build / JDK 17 and Scala 2.13 / 
kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize()](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14266/6/testReport/junit/kafka.server/DynamicBrokerReconfigurationTest/Build___JDK_17_and_Scala_2_13___testThreadPoolResize__/)
   [Build / JDK 17 and Scala 2.13 / 
org.apache.kafka.controller.QuorumControllerTest.testBalancePartitionLeaders()](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14266/6/testReport/junit/org.apache.kafka.controller/QuorumControllerTest/Build___JDK_17_and_Scala_2_13___testBalancePartitionLeaders__/)
   [Build / JDK 17 and Scala 2.13 / 
org.apache.kafka.tiered.storage.integration.OffloadAndConsumeFromLeaderTest.initializationError](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14266/6/testReport/junit/org.apache.kafka.tiered.storage.integration/OffloadAndConsumeFromLeaderTest/Build___JDK_17_and_Scala_2_13___initializationError/)
   [Build / JDK 17 and Scala 2.13 / 
org.apache.kafka.trogdor.coordinator.CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated()](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14266/6/testReport/junit/org.apache.kafka.trogdor.coordinator/CoordinatorTest/Build___JDK_17_and_Scala_2_13___testTaskRequestWithOldStartMsGetsUpdated__/)
   [Build / JDK 11 and Scala 2.13 / 
kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize()](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14266/6/testReport/junit/kafka.server/DynamicBrokerReconfigurationTest/Build___JDK_11_and_Scala_2_13___testThreadPoolResize__/)
   [Build / JDK 11 and Scala 2.13 / 
org.apache.kafka.controller.QuorumControllerTest.testBalancePartitionLeaders()](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14266/6/testReport/junit/org.apache.kafka.controller/QuorumControllerTest/Build___JDK_11_and_Scala_2_13___testBalancePartitionLeaders__/)
   [Build / JDK 11 and Scala 2.13 / 
org.apache.kafka.tiered.storage.integration.OffloadAndConsumeFromLeaderTest.initializationError](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14266/6/testReport/junit/org.apache.kafka.tiered.storage.integration/OffloadAndConsumeFromLeaderTest/Build___JDK_11_and_Scala_2_13___initializationError/)
   [Build / JDK 20 and Scala 2.13 / 
kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize()](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14266/6/testReport/junit/kafka.server/DynamicBrokerReconfigurationTest/Build___JDK_20_and_Scala_2_13___testThreadPoolResize__/)
   [Build / JDK 20 and Scala 2.13 / 
org.apache.kafka.controller.QuorumControllerTest.testBalancePartitionLeaders()](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14266/6/testReport/junit/org.apache.kafka.controller/QuorumControllerTest/Build___JDK_20_and_Scala_2_13___testBalancePartitionLeaders__/)
   [Build / JDK 20 and Scala 2.13 / 
org.apache.kafka.tiered.storage.integration.OffloadAndConsumeFromLeaderTest.initializationError](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14266/6/testReport/junit/org.apache.kafka.tiered.storage.integration/OffloadAndConsumeFromLeaderTest/Build___JDK_20_and_Scala_2_13___initializationError/)
   ```
   
   Merging to 3.6





[GitHub] [kafka] fvaleri commented on a diff in pull request #13562: KAFKA-14581: Moving GetOffsetShell to tools

2023-08-28 Thread via GitHub


fvaleri commented on code in PR #13562:
URL: https://github.com/apache/kafka/pull/13562#discussion_r1307122178


##
core/src/main/scala/kafka/tools/GetOffsetShell.scala:
##


Review Comment:
   @dengziming are you good with this?






[GitHub] [kafka] showuon commented on pull request #14296: KAFKA-15404: Fix the flaky DynamicBrokerReconfiguration test.

2023-08-28 Thread via GitHub


showuon commented on PR #14296:
URL: https://github.com/apache/kafka/pull/14296#issuecomment-1695283054

   Checking





[jira] [Resolved] (KAFKA-15294) Make remote storage related configs as public (i.e. non-internal)

2023-08-28 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya resolved KAFKA-15294.
--
Resolution: Fixed

> Make remote storage related configs as public (i.e. non-internal)
> -
>
> Key: KAFKA-15294
> URL: https://issues.apache.org/jira/browse/KAFKA-15294
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Assignee: Gantigmaa Selenge
>Priority: Blocker
> Fix For: 3.6.0
>
>
> We should publish all the remote storage related configs in v3.6.0. It can be 
> verified by:
>  
> {code:java}
> ./gradlew releaseTarGz
> # The build output is stored in 
> ./core/build/distributions/kafka_2.13-3.x.x-site-docs.tgz. Untar the file 
> and verify it{code}
>  
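
To make that check reproducible, the untar-and-verify step could be scripted. This is a sketch: the function name is my choice, `remote.log.storage.system.enable` is used only as a representative config key, and the tarball path is whatever the build actually produced:

```shell
# check_remote_storage_docs <site-docs.tgz>: unpack the site-docs tarball
# into a temp dir and report whether remote storage configs show up in the
# generated docs, using one representative config key as the probe.
check_remote_storage_docs() {
  tmp=$(mktemp -d)
  tar -xzf "$1" -C "$tmp" || return 1
  if grep -rq "remote.log.storage.system.enable" "$tmp"; then
    echo "remote storage configs are published"
  else
    echo "remote storage configs NOT found"
  fi
  rm -rf "$tmp"
}

# Usage (path is a placeholder for the actual build output):
# check_remote_storage_docs core/build/distributions/kafka_2.13-3.x.x-site-docs.tgz
```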





[jira] [Commented] (KAFKA-15396) Add a metric indicating the version of the current running kafka server

2023-08-28 Thread hudeqi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759514#comment-17759514
 ] 

hudeqi commented on KAFKA-15396:


Hi [~divijvaidya] [~jolshan], I have added the KIP-972 link to this Jira, thanks.

> Add a metric indicating the version of the current running kafka server
> ---
>
> Key: KAFKA-15396
> URL: https://issues.apache.org/jira/browse/KAFKA-15396
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 3.5.1
>Reporter: hudeqi
>Assignee: hudeqi
>Priority: Major
>  Labels: kip-required
>
> At present, there is no metric exposing the Kafka version that a broker is 
> running. If multiple Kafka versions are deployed in a cluster for various 
> reasons, it is difficult to get an intuitive view of the version 
> distribution.





[GitHub] [kafka] hudeqi commented on pull request #14284: KAFKA-15396:Add a metric indicating the version of the current running kafka server

2023-08-28 Thread via GitHub


hudeqi commented on PR #14284:
URL: https://github.com/apache/kafka/pull/14284#issuecomment-1695315101

   This is [KIP-972](https://cwiki.apache.org/confluence/x/M5ezDw).





[GitHub] [kafka] divijvaidya commented on pull request #14288: KAFKA-15256: Adding reviewer as part of release announcement email template

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14288:
URL: https://github.com/apache/kafka/pull/14288#issuecomment-1695326419

   cc: @mimaison since this addresses your comment at 
https://github.com/apache/kafka/pull/14080#pullrequestreview-1545043052
   
   cc: @satishd since you are RM for 3.6, FYI about this new change.





[jira] [Commented] (KAFKA-15378) Rolling upgrade system tests are failing

2023-08-28 Thread Arpit Goyal (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759522#comment-17759522
 ] 

Arpit Goyal commented on KAFKA-15378:
-

[~lbrutschy] Not yet. I just tried to reproduce the issue, but it is stuck at 
the following error. Do you know its cause?
Could not detect Kafka Streams version 3.6.0-SNAPSHOT on ducker@ducker12

> Rolling upgrade system tests are failing
> 
>
> Key: KAFKA-15378
> URL: https://issues.apache.org/jira/browse/KAFKA-15378
> Project: Kafka
>  Issue Type: Task
>  Components: streams, system tests
>Affects Versions: 3.5.1
>Reporter: Lucas Brutschy
>Priority: Major
>
> The following system tests are failing:
> {noformat}
> kafkatest.tests.streams.streams_upgrade_test.StreamsUpgradeTest.test_rolling_upgrade_with_2_bounces.from_version=3.1.2.to_version=3.6.0-SNAPSHOT
> kafkatest.tests.streams.streams_upgrade_test.StreamsUpgradeTest.test_rolling_upgrade_with_2_bounces.from_version=3.2.3.to_version=3.6.0-SNAPSHOT
> kafkatest.tests.streams.streams_upgrade_test.StreamsUpgradeTest.test_rolling_upgrade_with_2_bounces.from_version=3.3.1.to_version=3.6.0-SNAPSHOT
> {noformat}
> See 
> [https://jenkins.confluent.io/job/system-test-kafka-branch-builder/5801/console]
>  for logs and other test data.
> Note that system tests currently only run with [this 
> fix|https://github.com/apache/kafka/commit/24d1780061a645bb2fbeefd8b8f50123c28ca94e];
>  I think a Python library update for a CVE broke the system tests... 





[GitHub] [kafka] divijvaidya commented on pull request #14288: KAFKA-15256: Adding reviewer as part of release announcement email template

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14288:
URL: https://github.com/apache/kafka/pull/14288#issuecomment-1695348329

   Admittedly, there are some problems with the current approach. For 
example, the output of the contributor diff between 3.5 and trunk:
   ```
   git shortlog -sn --group=author --group=trailer:co-authored-by \
     --group=trailer:Reviewers --no-merges 3.5..trunk \
     | cut -f2 | sort --ignore-case | uniq
   ```
   
   ```
   A. Sophie Blee-Goldman
   Aaron Ai
   Abhijeet Kumar
   aindriu-aiven
   Akhilesh C
   Akhilesh Chaganti
   Alexandre Dupriez
   Alexandre Garnier
   Alok Thatikunta
   Alyssa Huang
   Aman Singh
   Andras Katona
   Andrew Schofield, Greg Harris
   andymg3
   Aneel Kumar
   Anna Sophie Blee-Goldman
   Anton Agestam
   Artem Livshits
   atu-sharm
   bachmanity1
   Bill Bejeck
   Bo Gao
   Bruno Cadonna
   Calvin Liu
   Chaitanya Mukka
   Chase Thomas
   Cheryl Simmons
   Chia-Ping Tsai
   Chris Egerton
   Christo Lolov
   Christo Lolov (@clolov), Bill Bejeck
   Clay Johnson
   Colin P. McCabe
   Colin Patrick McCabe
   Colt McNealy
   d00791190
   Damon Xie
   Danica Fine
   Daniel Scanteianu
   Daniel Urban
   David Arthur
   David Jacot
   David Mao
   dengziming
   Deqi Hu
   Dimitar Dimitrov
   Divij Vaidya
   DL1231
   Dániel Urbán
   Erik van Oosten
   ezio
   Farooq Qaiser
   Federico Valeri
   flashmouse
   Florin Akermann
   Gabriel Oliveira
   Gantigmaa Selenge
   gaurav-narula
   GeunJae Jeon
   Greg Harris
   Guozhang Wang
   Hailey Ni
   Hao Li
   Hector Geraldino
   hudeqi
   hzh0425
   Iblis Lin
   Ismael Juma
   Ivan Yurchenko
   James Shaw
   Jason Gustafson
   Jeff Kim
   Jim Galasyn
   John Roesler
   Joobi S B
   Jorge Esteban Quilcate Otoya
   Josep Prat
   Joseph (Ting-Chou) Lin
   José Armando García Sancio
   Jun Rao
   Justine Olshan
   Kamal Chandraprakash
   Keith Wall
   Kirk True
   Lianet Magrans
   LinShunKang
   lixy
   Lucas Bradstreet
   Lucas Brutschy
   Lucent-Wong
   Lucia Cerchie
   Luke Chen
   Manikumar Reddy
   Manyanda Chitimbo
   Maros Orsak
   Matthew de Detrich
   Matthias J. Sax
   maulin-vasavada
   Max Riedel
   Mehari Beyene
   Michal Cabak (@miccab), John Roesler
   Mickael Maison
   Milind Mantri
   minjian.cai
   mojh7
   Nikolay
   Okada Haruki
   olalamichelle
   Omnia G H Ibrahim
   Omnia G.H Ibrahim
   Owen Leung
   Philip Nee
   prasanthV
   Proven Provenzano
   Purshotam Chauhan
   Qichao Chu
   Qichao Chu (@ex172000), Chris Egerton
   Qichao Chu (@ex172000), Mickael Maison
   Rajini Sivaram
   Randall Hauch
   Reviewers: Victoria Xia
   Ritika Reddy
   Rittika Adhikari
   Ron Dagostino
   Sagar Rao
   Said Boudjelda
   Sambhav Jain
   Satish Duggana
   sciclon2
   Shekhar Rajak
   Sungyun Hur
   Sushant Mahajan
   Tanay Karmarkar
   tison
   Tom Bentley
   vamossagar12
   Victoria Xia
   vveicc
   Walker Carlson
   Yash Mayya
   Yi-Sheng Lien
   Ziming Deng
   蓝士钦 
   ```
   
   Note that there are commits where the "Reviewers" field is not formatted 
properly, which leads to entries such as:
   ```
   Christo Lolov (@clolov), Bill Bejeck
   Qichao Chu (@ex172000), Chris Egerton
   Qichao Chu (@ex172000), Mickael Maison
   ```
   I would accept this as a known limitation for now and expect release 
managers to manually fix such inconsistencies.
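
   These malformed entries could also be post-processed mechanically. A sketch (the function name is mine, and it assumes only the two failure shapes visible above: comma-joined names with "(@handle)" suffixes, and a stray "Reviewers:" prefix):

   ```shell
   # normalize_contributors: split comma-joined reviewer entries into one
   # name per line, drop "(@handle)" suffixes and a stray "Reviewers:"
   # prefix, trim whitespace, then de-duplicate, so
   # "Christo Lolov (@clolov), Bill Bejeck" yields two clean names.
   normalize_contributors() {
     sed 's/^Reviewers: *//' \
       | tr ',' '\n' \
       | sed -e 's/ *(@[^)]*)//' -e 's/^ *//' -e 's/ *$//' \
       | sed '/^$/d' \
       | sort --ignore-case | uniq
   }

   # e.g.:
   # git shortlog -sn --group=author --group=trailer:co-authored-by \
   #   --group=trailer:Reviewers --no-merges 3.5..trunk \
   #   | cut -f2 | normalize_contributors
   ```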





[GitHub] [kafka] divijvaidya merged pull request #14288: KAFKA-15256: Adding reviewer as part of release announcement email template

2023-08-28 Thread via GitHub


divijvaidya merged PR #14288:
URL: https://github.com/apache/kafka/pull/14288





[GitHub] [kafka] divijvaidya commented on pull request #14288: KAFKA-15256: Adding reviewer as part of release announcement email template

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14288:
URL: https://github.com/apache/kafka/pull/14288#issuecomment-1695351862

   Merging to 3.6 as well





[GitHub] [kafka] showuon opened a new pull request, #14298: MINOR: investigating output threads

2023-08-28 Thread via GitHub


showuon opened a new pull request, #14298:
URL: https://github.com/apache/kafka/pull/14298

   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] showuon closed pull request #14298: MINOR: investigating output threads

2023-08-28 Thread via GitHub


showuon closed pull request #14298: MINOR: investigating output threads
URL: https://github.com/apache/kafka/pull/14298





[GitHub] [kafka] showuon opened a new pull request, #14299: [WIP] MINOR: Find threads leak2

2023-08-28 Thread via GitHub


showuon opened a new pull request, #14299:
URL: https://github.com/apache/kafka/pull/14299

   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Updated] (KAFKA-15256) Add code reviewers to contributors list in release email

2023-08-28 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya updated KAFKA-15256:
-
Fix Version/s: 3.6.0

> Add code reviewers to contributors list in release email
> 
>
> Key: KAFKA-15256
> URL: https://issues.apache.org/jira/browse/KAFKA-15256
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Arpit Goyal
>Priority: Minor
> Fix For: 3.6.0
>
>
> Today, we parse the names from commit messages, and the authors and 
> co-authors are added as contributors in the release email. We should add 
> reviewers as well. This can be done by parsing the "Reviewers:" trailer in 
> the commit message.
> Context, see conversation at: https://github.com/apache/kafka/pull/14080





[jira] [Assigned] (KAFKA-15352) Ensure consistency while deleting the remote log segments

2023-08-28 Thread Kamal Chandraprakash (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kamal Chandraprakash reassigned KAFKA-15352:


Assignee: Kamal Chandraprakash

> Ensure consistency while deleting the remote log segments
> -
>
> Key: KAFKA-15352
> URL: https://issues.apache.org/jira/browse/KAFKA-15352
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Kamal Chandraprakash
>Assignee: Kamal Chandraprakash
>Priority: Major
>
> In KAFKA-14888, remote log segments that breach the retention time/size 
> are deleted before the log-start-offset is updated. If, in the middle of 
> deletion, a consumer starts to read from the beginning of the topic, it 
> will fail to read the messages and UNKNOWN_SERVER_ERROR will be thrown 
> back to the consumer.
> To ensure consistency, similar to local log segments, where the actual 
> segments are deleted only after {{segment.delete.delay.ms}}, we should 
> update the log-start-offset first, before deleting the remote log segment.
> See the [PR#13561|https://github.com/apache/kafka/pull/13561] and 
> [comment|https://github.com/apache/kafka/pull/13561#discussion_r1293086543] 
> for more details.
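
The ordering described above, in pseudocode; this is a sketch of the intent, not the actual RemoteLogManager code:

```
# sketch: consistency-preserving deletion order for remote segments
segments = remote_segments_breaching_retention(topic_partition)
new_log_start_offset = max(end_offset(s) for s in segments) + 1
update_log_start_offset(topic_partition, new_log_start_offset)  # advance first
for s in segments:
    delete_remote_segment(s)  # only then physically delete
```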





[jira] [Created] (KAFKA-15408) Restart failed tasks in Kafka Connect up to a configurable max-tries

2023-08-28 Thread Patrick Pang (Jira)
Patrick Pang created KAFKA-15408:


 Summary: Restart failed tasks in Kafka Connect up to a 
configurable max-tries
 Key: KAFKA-15408
 URL: https://issues.apache.org/jira/browse/KAFKA-15408
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Reporter: Patrick Pang


h2. Issue

Currently, Kafka Connect only reports failed tasks, with the error, on the 
REST API. Users are expected to monitor the status and restart individual 
connectors if there are transient errors. Unfortunately, such errors are 
common for database connectors, e.g. transient connection errors, DNS flips, 
and database downtime. Kafka Connect silently failing in these scenarios 
would lead to stale data downstream.
h2. Proposal

Kafka Connect should be able to restart failed tasks automatically, up to a 
configurable max-tries.
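
As a stop-gap, the monitor-and-restart loop described above can be done externally against the Connect REST API. A sketch: CONNECT_URL and the connector name are placeholders, and the `?includeTasks=true&onlyFailed=true` restart form is, to my understanding, the one added by KIP-745 in Kafka 3.0:

```shell
# Sketch of the external workaround: check a connector's status JSON and
# restart only its failed instances via the Connect REST API. CONNECT_URL
# and the connector name are placeholders; JSON matching via grep is
# deliberately crude to keep the sketch dependency-free.
CONNECT_URL=${CONNECT_URL:-http://localhost:8083}

restart_if_failed() {  # restart_if_failed <connector-name> <status-json>
  if printf '%s' "$2" | grep -q '"state" *: *"FAILED"'; then
    # ignore transient curl failures here; the next poll simply retries
    curl -s -X POST \
      "$CONNECT_URL/connectors/$1/restart?includeTasks=true&onlyFailed=true" \
      >/dev/null 2>&1 || true
    echo "restart requested for failed tasks of $1"
  fi
}

# e.g., polled from cron:
# restart_if_failed my-connector "$(curl -s "$CONNECT_URL/connectors/my-connector/status")"
```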





[GitHub] [kafka] lucasbru opened a new pull request, #14300: MINOR: StoreChangelogReaderTest fails with log-level DEBUG

2023-08-28 Thread via GitHub


lucasbru opened a new pull request, #14300:
URL: https://github.com/apache/kafka/pull/14300

   A mocked method is executed unexpectedly when we enable the DEBUG log level, 
leading to confusing test failures during debugging. Since the log message 
itself seems useful, we adapt the test to take the additional mocked method 
call into account.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-15408) Restart failed tasks in Kafka Connect up to a configurable max-tries

2023-08-28 Thread Patrick Pang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Pang updated KAFKA-15408:
-
Description: 
h2. Issue

Currently, Kafka Connect just reports failed tasks on the REST API, along with 
the error. Users are expected to monitor the status and restart individual 
connectors if there are transient errors. Unfortunately these are common for 
database connectors, e.g. transient connection errors, DNS flips, database 
downtime, etc. Kafka Connect silently failing in these scenarios would lead to 
stale data downstream.
h2. Proposal

Kafka Connect should be able to restart failed tasks automatically, up to a 
configurable max-tries.
h2. Prior arts
 * 
[https://github.com/strimzi/proposals/blob/main/007-restarting-kafka-connect-connectors-and-tasks.md]
 
 * 
[https://docs.aiven.io/docs/products/kafka/kafka-connect/howto/enable-automatic-restart]
 

  was:
h2. Issue

Currently, Kafka Connect just reports failed tasks on the REST API, along with 
the error. Users are expected to monitor the status and restart individual 
connectors if there are transient errors. Unfortunately these are common for 
database connectors, e.g. transient connection errors, DNS flips, database 
downtime, etc. Kafka Connect silently failing in these scenarios would lead to 
stale data downstream.
h2. Proposal

Kafka Connect should be able to restart failed tasks automatically, up to a 
configurable max-tries.


> Restart failed tasks in Kafka Connect up to a configurable max-tries
> 
>
> Key: KAFKA-15408
> URL: https://issues.apache.org/jira/browse/KAFKA-15408
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Patrick Pang
>Priority: Major
>
> h2. Issue
> Currently, Kafka Connect just reports failed tasks on REST API, with the 
> error. Users are expected to monitor the status and restart individual 
> connectors if there is transient errors. Unfortunately these are common for 
> database connectors, e.g. transient connection error, flip of DNS, database 
> downtime, etc. Kafka Connect silently failing due to these scenarios would 
> lead to stale data downstream.
> h2. Proposal
> Kafka Connect should be able to restart failed tasks automatically, up to a 
> configurable max-tries.
> h2. Prior arts
>  * 
> [https://github.com/strimzi/proposals/blob/main/007-restarting-kafka-connect-connectors-and-tasks.md]
>  
>  * 
> [https://docs.aiven.io/docs/products/kafka/kafka-connect/howto/enable-automatic-restart]
>  





[GitHub] [kafka] divijvaidya commented on pull request #11438: KAFKA-13403 Fix KafkaServer crashes when deleting topics due to the race in log deletion

2023-08-28 Thread via GitHub


divijvaidya commented on PR #11438:
URL: https://github.com/apache/kafka/pull/11438#issuecomment-1695381454

   @arunmathew88, the ball is in my court right now. Please give me a few days 
to review this. There are other places, apart from what @ocadaruma mentioned, 
such as `StateDirectory.java`, which utilize this function. I want to ensure 
that this change in behaviour does not impact the logic at those places.





[GitHub] [kafka] divijvaidya commented on a diff in pull request #14222: KAFKA-14133: Move RocksDBRangeIteratorTest, TimestampedKeyValueStoreBuilderTest and TimestampedSegmentTest to Mockito

2023-08-28 Thread via GitHub


divijvaidya commented on code in PR #14222:
URL: https://github.com/apache/kafka/pull/14222#discussion_r1307205079


##
streams/src/test/java/org/apache/kafka/streams/state/internals/TimestampedKeyValueStoreBuilderTest.java:
##
@@ -155,42 +144,24 @@ public void shouldThrowNullPointerIfInnerIsNull() {
 
 @Test
 public void shouldNotThrowNullPointerIfKeySerdeIsNull() {
-reset(supplier);
-expect(supplier.name()).andReturn("name");
-expect(supplier.metricsScope()).andReturn("metricScope").anyTimes();
-replay(supplier);
-
 // does not throw
 new TimestampedKeyValueStoreBuilder<>(supplier, null, Serdes.String(), 
new MockTime());
 }
 
 @Test
 public void shouldNotThrowNullPointerIfValueSerdeIsNull() {
-reset(supplier);
-expect(supplier.name()).andReturn("name");
-expect(supplier.metricsScope()).andReturn("metricScope").anyTimes();
-replay(supplier);
-
 // does not throw
 new TimestampedKeyValueStoreBuilder<>(supplier, Serdes.String(), null, 
new MockTime());
 }
 
 @Test
 public void shouldThrowNullPointerIfTimeIsNull() {
-reset(supplier);
-expect(supplier.name()).andReturn("name");
-expect(supplier.metricsScope()).andReturn("metricScope").anyTimes();
-replay(supplier);
-
 assertThrows(NullPointerException.class, () -> new 
TimestampedKeyValueStoreBuilder<>(supplier, Serdes.String(), Serdes.String(), 
null));
 }
 
 @Test
 public void shouldThrowNullPointerIfMetricsScopeIsNull() {
-reset(supplier);
-expect(supplier.get()).andReturn(new RocksDBTimestampedStore("name", 
null));
-expect(supplier.name()).andReturn("name");

Review Comment:
   makes sense. Thank you.






[jira] [Commented] (KAFKA-15408) Restart failed tasks in Kafka Connect up to a configurable max-tries

2023-08-28 Thread Sagar Rao (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759529#comment-17759529
 ] 

Sagar Rao commented on KAFKA-15408:
---

[~patrickpang], thanks for filing this! IMO, this is a feature which is long 
overdue in the Connect framework. Do you plan to pick this one up? I ask 
because, if the answer is yes, we would need a KIP: we might change some of the 
behaviour, since the status endpoint responses might no longer reflect a task 
failure as soon as the task fails. Also, the configurable max-tries possibly 
means the addition of a new config.

> Restart failed tasks in Kafka Connect up to a configurable max-tries
> 
>
> Key: KAFKA-15408
> URL: https://issues.apache.org/jira/browse/KAFKA-15408
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Patrick Pang
>Priority: Major
>
> h2. Issue
> Currently, Kafka Connect just reports failed tasks on REST API, with the 
> error. Users are expected to monitor the status and restart individual 
> connectors if there is transient errors. Unfortunately these are common for 
> database connectors, e.g. transient connection error, flip of DNS, database 
> downtime, etc. Kafka Connect silently failing due to these scenarios would 
> lead to stale data downstream.
> h2. Proposal
> Kafka Connect should be able to restart failed tasks automatically, up to a 
> configurable max-tries.
> h2. Prior arts
>  * 
> [https://github.com/strimzi/proposals/blob/main/007-restarting-kafka-connect-connectors-and-tasks.md]
>  
>  * 
> [https://docs.aiven.io/docs/products/kafka/kafka-connect/howto/enable-automatic-restart]
>  





[jira] [Updated] (KAFKA-15408) Restart failed tasks in Kafka Connect up to a configurable max-tries

2023-08-28 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao updated KAFKA-15408:
--
Labels: needs-kip  (was: )

> Restart failed tasks in Kafka Connect up to a configurable max-tries
> 
>
> Key: KAFKA-15408
> URL: https://issues.apache.org/jira/browse/KAFKA-15408
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Patrick Pang
>Priority: Major
>  Labels: needs-kip
>
> h2. Issue
> Currently, Kafka Connect just reports failed tasks on REST API, with the 
> error. Users are expected to monitor the status and restart individual 
> connectors if there is transient errors. Unfortunately these are common for 
> database connectors, e.g. transient connection error, flip of DNS, database 
> downtime, etc. Kafka Connect silently failing due to these scenarios would 
> lead to stale data downstream.
> h2. Proposal
> Kafka Connect should be able to restart failed tasks automatically, up to a 
> configurable max-tries.
> h2. Prior arts
>  * 
> [https://github.com/strimzi/proposals/blob/main/007-restarting-kafka-connect-connectors-and-tasks.md]
>  
>  * 
> [https://docs.aiven.io/docs/products/kafka/kafka-connect/howto/enable-automatic-restart]
>  





[GitHub] [kafka] showuon commented on pull request #14236: KAFKA-15353: make sure AlterPartitionRequestBuilder.build() is idempotent

2023-08-28 Thread via GitHub


showuon commented on PR #14236:
URL: https://github.com/apache/kafka/pull/14236#issuecomment-169530

   Failed tests are unrelated.





[GitHub] [kafka] showuon merged pull request #14236: KAFKA-15353: make sure AlterPartitionRequestBuilder.build() is idempotent

2023-08-28 Thread via GitHub


showuon merged PR #14236:
URL: https://github.com/apache/kafka/pull/14236





[GitHub] [kafka] showuon commented on pull request #14236: KAFKA-15353: make sure AlterPartitionRequestBuilder.build() is idempotent

2023-08-28 Thread via GitHub


showuon commented on PR #14236:
URL: https://github.com/apache/kafka/pull/14236#issuecomment-1695405585

   Backported to 3.5 and 3.6.





[GitHub] [kafka] divijvaidya merged pull request #14210: KAFKA-14133: Move RecordCollectorTest, StateRestoreCallbackAdapterTest and StoreToProcessorContextAdapterTest to Mockito

2023-08-28 Thread via GitHub


divijvaidya merged PR #14210:
URL: https://github.com/apache/kafka/pull/14210





[jira] [Created] (KAFKA-15409) Distinguishing controller configs from broker configs in KRaft mode

2023-08-28 Thread Luke Chen (Jira)
Luke Chen created KAFKA-15409:
-

 Summary: Distinguishing controller configs from broker configs in 
KRaft mode
 Key: KAFKA-15409
 URL: https://issues.apache.org/jira/browse/KAFKA-15409
 Project: Kafka
  Issue Type: Improvement
  Components: kraft
Reporter: Luke Chen
Assignee: Luke Chen


In the doc, we categorize the configs by component. Currently, we have:

{code:java}

3. Configuration
3.1 Broker Configs
3.2 Topic Configs
3.3 Producer Configs
3.4 Consumer Configs
3.5 Kafka Connect Configs
Source Connector Configs
Sink Connector Configs 
3.6 Kafka Streams Configs
3.7 AdminClient Configs
3.8 System Properties 
{code}

The `3.1 Broker Configs` section currently contains:
1. controller-role-only configs
2. broker-role-only configs
3. configs applicable to both controller and broker

We should have a way to let users know which configs are for the controller, 
which are for the broker, and which are for both.


Created a 
[wiki|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263427911]
 to list the configs for controller/broker.







[jira] [Commented] (KAFKA-15408) Restart failed tasks in Kafka Connect up to a configurable max-tries

2023-08-28 Thread Patrick Pang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759532#comment-17759532
 ] 

Patrick Pang commented on KAFKA-15408:
--

Sure, I can try to come up with something. Can you let me know what the steps 
are to create a KIP? I'm pretty new to this.

> Restart failed tasks in Kafka Connect up to a configurable max-tries
> 
>
> Key: KAFKA-15408
> URL: https://issues.apache.org/jira/browse/KAFKA-15408
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Patrick Pang
>Priority: Major
>  Labels: needs-kip
>
> h2. Issue
> Currently, Kafka Connect just reports failed tasks on REST API, with the 
> error. Users are expected to monitor the status and restart individual 
> connectors if there is transient errors. Unfortunately these are common for 
> database connectors, e.g. transient connection error, flip of DNS, database 
> downtime, etc. Kafka Connect silently failing due to these scenarios would 
> lead to stale data downstream.
> h2. Proposal
> Kafka Connect should be able to restart failed tasks automatically, up to a 
> configurable max-tries.
> h2. Prior arts
>  * 
> [https://github.com/strimzi/proposals/blob/main/007-restarting-kafka-connect-connectors-and-tasks.md]
>  
>  * 
> [https://docs.aiven.io/docs/products/kafka/kafka-connect/howto/enable-automatic-restart]
>  





[GitHub] [kafka] divijvaidya commented on pull request #14294: KRaft support for DescribeClusterRequestTest and DeleteConsumerGroupsTest

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14294:
URL: https://github.com/apache/kafka/pull/14294#issuecomment-1695416860

   cc: @dengziming 





[GitHub] [kafka] divijvaidya commented on pull request #14293: KAFKA-15372: Reconfigure dedicated MM2 connectors after leadership change

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14293:
URL: https://github.com/apache/kafka/pull/14293#issuecomment-1695417548

   cc: @dengziming as you might be interested in reviewing this





[GitHub] [kafka] divijvaidya commented on pull request #14295: Kraft support for Integration Tests

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14295:
URL: https://github.com/apache/kafka/pull/14295#issuecomment-1695418468

   cc: @dengziming as you might be interested in reviewing this





[GitHub] [kafka] abhijeetk88 commented on pull request #14112: KAFKA-15261: Do not block replica fetcher if RLMM is not initialized

2023-08-28 Thread via GitHub


abhijeetk88 commented on PR #14112:
URL: https://github.com/apache/kafka/pull/14112#issuecomment-1695420853

   Closing this PR as it is not required anymore. The ReplicaFetcher will no 
longer get blocked because of the changes added in 
https://github.com/apache/kafka/pull/14127





[GitHub] [kafka] abhijeetk88 closed pull request #14112: KAFKA-15261: Do not block replica fetcher if RLMM is not initialized

2023-08-28 Thread via GitHub


abhijeetk88 closed pull request #14112: KAFKA-15261: Do not block replica 
fetcher if RLMM is not initialized
URL: https://github.com/apache/kafka/pull/14112





[GitHub] [kafka] divijvaidya commented on pull request #14256: [KAFKA-14133] Migrate EasyMock to Mockito in GlobalStateStoreProviderTest, KeyValue…

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14256:
URL: https://github.com/apache/kafka/pull/14256#issuecomment-1695423076

   > It seems that I do not have edit permissions to the ticket
   
   @olalamichelle what is your JIRA Id? Have you created one using 
https://selfserve.apache.org/jira-account.html as mentioned here: 
https://kafka.apache.org/contributing.html





[GitHub] [kafka] kamalcph opened a new pull request, #14301: KAFKA-15351: Ensure log-start-offset not updated to local-log-start-offset when remote storage enabled

2023-08-28 Thread via GitHub


kamalcph opened a new pull request, #14301:
URL: https://github.com/apache/kafka/pull/14301

   When tiered storage is enabled on the topic and the last-standing replica 
is restarted, the log-start-offset should not be reset to the 
first-local-log-segment-base-offset.
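A minimal sketch of the rule this PR enforces, using a hypothetical helper name (Kafka's actual recovery path is considerably more involved): with remote storage enabled, the restarted broker must keep the earlier log-start-offset, because older data may still exist only in remote storage.

```python
def recovered_log_start_offset(remote_storage_enabled,
                               earlier_log_start_offset,
                               first_local_segment_base_offset):
    # Illustrative rule only, not Kafka's implementation: when tiered
    # storage is on, data below the first local segment may still live
    # in remote storage, so the earlier log-start-offset must survive.
    if remote_storage_enabled:
        return earlier_log_start_offset
    return first_local_segment_base_offset

# Broker restarts after segments [0, 1200) were offloaded to remote storage:
print(recovered_log_start_offset(True, 0, 1200))   # -> 0
print(recovered_log_start_offset(False, 0, 1200))  # -> 1200
```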
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] divijvaidya commented on pull request #14299: [WIP] MINOR: Find threads leak2

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14299:
URL: https://github.com/apache/kafka/pull/14299#issuecomment-1695437219

   Perhaps also try adding `verifyNoUnexpectedThreads` at the start and end of 
tests which use:
   ```
   @ExtendWith(value = ClusterTestExtensions.class)
   ```
   
   If those tests are the culprit, they would fail immediately, hinting that 
they are leaking threads. I would suggest starting with FeatureCommandUnitTest, 
DeleteRecordsCommandTest and MetadataQuorumCommandTest.
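The kind of check `verifyNoUnexpectedThreads` performs can be approximated generically by diffing thread snapshots taken before and after a test. A sketch in Python — the real helper lives in Kafka's JVM test utilities, so everything below is illustrative:

```python
import threading

def snapshot_threads():
    # Names of all live threads right now.
    return {t.name for t in threading.enumerate()}

def find_leaked_threads(before):
    # Threads that exist after the test but did not exist before it.
    return snapshot_threads() - before

before = snapshot_threads()

# A "test" that forgets to stop its background thread:
stop = threading.Event()
worker = threading.Thread(target=stop.wait, name="leaky-worker", daemon=True)
worker.start()

print(sorted(find_leaked_threads(before)))  # -> ['leaky-worker']

stop.set()      # clean up the leaked thread
worker.join()
```

Running such a check at both the start and the end of a suspect test, as suggested above, pinpoints which test leaves threads behind rather than failing some unrelated later test.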





[GitHub] [kafka] divijvaidya commented on pull request #14285: KAFKA-15399: Enable OffloadAndConsumeFromLeader test

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14285:
URL: https://github.com/apache/kafka/pull/14285#issuecomment-1695439979

   @clolov has a national holiday today. Merging this in since we have 2 
approvals already and merging this test is time sensitive due to code freeze. 
We can address his comments once he is back in a separate PR.





[GitHub] [kafka] divijvaidya merged pull request #14285: KAFKA-15399: Enable OffloadAndConsumeFromLeader test

2023-08-28 Thread via GitHub


divijvaidya merged PR #14285:
URL: https://github.com/apache/kafka/pull/14285





[GitHub] [kafka] divijvaidya commented on pull request #14285: KAFKA-15399: Enable OffloadAndConsumeFromLeader test

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14285:
URL: https://github.com/apache/kafka/pull/14285#issuecomment-1695441799

   Merging into 3.6 





[jira] [Updated] (KAFKA-15399) Enable OffloadAndConsumeFromLeader test

2023-08-28 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya updated KAFKA-15399:
-
Fix Version/s: 3.6.0

> Enable OffloadAndConsumeFromLeader test
> ---
>
> Key: KAFKA-15399
> URL: https://issues.apache.org/jira/browse/KAFKA-15399
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Kamal Chandraprakash
>Assignee: Kamal Chandraprakash
>Priority: Major
> Fix For: 3.6.0
>
>
> Build / JDK 17 and Scala 2.13 / initializationError – 
> org.apache.kafka.tiered.storage.integration.OffloadAndConsumeFromLeaderTest





[jira] [Commented] (KAFKA-15224) Automate version change to snapshot

2023-08-28 Thread Divij Vaidya (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759537#comment-17759537
 ] 

Divij Vaidya commented on KAFKA-15224:
--

https://github.com/apache/kafka/pull/14229/files

> Automate version change to snapshot 
> 
>
> Key: KAFKA-15224
> URL: https://issues.apache.org/jira/browse/KAFKA-15224
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Tanay Karmarkar
>Priority: Minor
>
> We require changing to SNAPSHOT version as part of the release process [1]. 
> The specific manual steps are:
> Update version on the branch to 0.10.0.1-SNAPSHOT in the following places:
>  * 
>  ** docs/js/templateData.js
>  ** gradle.properties
>  ** kafka-merge-pr.py
>  ** streams/quickstart/java/pom.xml
>  ** streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
>  ** streams/quickstart/pom.xml
>  ** tests/kafkatest/_{_}init{_}_.py (note: this version name can't follow the 
> -SNAPSHOT convention due to python version naming restrictions, instead
> update it to 0.10.0.1.dev0)
>  ** tests/kafkatest/version.py
> The diff of the changes look like 
> [https://github.com/apache/kafka/commit/484a86feb562f645bdbec74b18f8a28395a686f7#diff-21a0ab11b8bbdab9930ad18d4bca2d943bbdf40d29d68ab8a96f765bd1f9]
>  
>  
> It would be nice if we could run a script to automatically do it. Note that 
> release.py (line 550) already does something similar where it replaces 
> SNAPSHOT with the actual version. We need to do the opposite here. We can 
> repurpose that code in release.py and extract it into a new script to perform 
> this operation.
> [1] [https://cwiki.apache.org/confluence/display/KAFKA/Release+Process]
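The opposite replacement the ticket asks for (released version → next SNAPSHOT) could be sketched like this. A minimal illustration assuming a three-part MAJOR.MINOR.PATCH version; `to_snapshot` and `bump_text` are hypothetical names, not part of release.py:

```python
import re

def to_snapshot(version):
    # "3.5.1" -> "3.5.2-SNAPSHOT": bump the patch number, append -SNAPSHOT.
    # (Assumes a three-part version; tests/kafkatest needs ".dev0" instead,
    # as the steps above note.)
    major, minor, patch = map(int, version.split("."))
    return "%d.%d.%d-SNAPSHOT" % (major, minor, patch + 1)

def bump_text(text, released_version):
    # Replace only the exact version token, so surrounding indentation and
    # markup stay untouched.
    return re.sub(re.escape(released_version),
                  to_snapshot(released_version), text)

line = "        <version>3.5.1</version>"
print(bump_text(line, "3.5.1"))
# ->         <version>3.5.2-SNAPSHOT</version>
```

Substituting only the version token (rather than re-rendering whole files) also avoids the unrelated indentation changes reported later in this thread.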





[GitHub] [kafka] divijvaidya commented on pull request #14229: KAFKA-15224: automating version change

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14229:
URL: https://github.com/apache/kafka/pull/14229#issuecomment-1695497682

   Thank you for making this change @tanay27. A couple of remaining actions:
   1. Please add a README.md in the release folder mentioning what you added in 
the description here, i.e. to use `pip install -r requirements.txt` to prepare 
the dependencies etc.
   2. Would you suggest using virtualenv in the release folder as well? If yes, 
can you please add that to the README too?
   3. Please move release.py into the release folder as well.





[GitHub] [kafka] divijvaidya commented on pull request #14229: KAFKA-15224: automating version change

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14229:
URL: https://github.com/apache/kafka/pull/14229#issuecomment-1695531757

   @tanay27 When I run this script, I get the following warnings:
   ```
   python3 version_change.py --version 3.6.2
   WARNING: Couldn't write lextab module . Won't 
overwrite existing lextab module
   WARNING: yacc table file version is out of date
   WARNING: Token 'BLOCK_COMMENT' defined, but not used
   WARNING: Token 'CLASS' defined, but not used
   WARNING: Token 'CONST' defined, but not used
   WARNING: Token 'ENUM' defined, but not used
   WARNING: Token 'EXPORT' defined, but not used
   WARNING: Token 'EXTENDS' defined, but not used
   WARNING: Token 'IMPORT' defined, but not used
   WARNING: Token 'LINE_COMMENT' defined, but not used
   WARNING: Token 'LINE_TERMINATOR' defined, but not used
   WARNING: Token 'SUPER' defined, but not used
   WARNING: There are 10 unused tokens
   WARNING: Couldn't create . Won't 
overwrite existing tabmodule
   ```
   
   Do you know if they are expected? If not, can we fix them?





[GitHub] [kafka] divijvaidya commented on pull request #14229: KAFKA-15224: automating version change

2023-08-28 Thread via GitHub


divijvaidya commented on PR #14229:
URL: https://github.com/apache/kafka/pull/14229#issuecomment-1695535002

   The output of the script has an indentation problem in the following diff; 
it does not produce the correct indentation for ""
   ```
   diff --git a/streams/quickstart/java/pom.xml 
b/streams/quickstart/java/pom.xml
   index acd3a1b285..02a1df3f69 100644
   --- a/streams/quickstart/java/pom.xml
   +++ b/streams/quickstart/java/pom.xml
   @@ -26,7 +26,7 @@

org.apache.kafka
streams-quickstart
   -3.5.1
   +3.5.2-SNAPSHOT
..

   ```
   
   and
   ```
   diff --git a/streams/quickstart/java/pom.xml 
b/streams/quickstart/java/pom.xml
   index acd3a1b285..02a1df3f69 100644
   --- a/streams/quickstart/java/pom.xml
   +++ b/streams/quickstart/java/pom.xml
   @@ -26,7 +26,7 @@

org.apache.kafka
streams-quickstart
   -3.5.1
   +3.5.2-SNAPSHOT
..

   ```
   
   similarly, it modified the indentation in the following block, whereas it 
should not modify it
   ```
// Define variables for doc templates
   -var context={
   -"version": "35",
   -"dotVersion": "3.5",
   -"fullDotVersion": "3.5.1",
   -"scalaVersion": "2.13"
   -};
   +var context = {
   +  "version": "35",
   +  "dotVersion": "3.5",
   +  "fullDotVersion": "3.5.2",
   +  "scalaVersion": "2.13"
   +};
   ```





[GitHub] [kafka] kamalcph commented on pull request #14301: KAFKA-15351: Ensure log-start-offset not updated to local-log-start-offset when remote storage enabled

2023-08-28 Thread via GitHub


kamalcph commented on PR #14301:
URL: https://github.com/apache/kafka/pull/14301#issuecomment-1695539631

   ### Test Report
   
   ```scala
   [SUCCESS] (1) create topic: Topic[name=topicA partition-count=1 
replication-factor=1 segment-size=1 assignment=null 
properties={remote.storage.enable=true, local.retention.bytes=1, 
index.interval.bytes=1, segment.index.bytes=12}]
   
   [SUCCESS] (2) produce-records: topicA-0
   ProducerRecord(topic=topicA, partition=0, headers=RecordHeaders(headers 
= [], isReadOnly = true), key=k1, value=v1, timestamp=null)
   ProducerRecord(topic=topicA, partition=0, headers=RecordHeaders(headers 
= [], isReadOnly = true), key=k2, value=v2, timestamp=null)
   ProducerRecord(topic=topicA, partition=0, headers=RecordHeaders(headers 
= [], isReadOnly = true), key=k3, value=v3, timestamp=null)
   Segment[partition=topicA-0 offloaded-by-broker-id=0 base-offset=0 
record-count=1]
   Segment[partition=topicA-0 offloaded-by-broker-id=0 base-offset=1 
record-count=1]
   
   [SUCCESS] (3) create topic: Topic[name=topicB partition-count=1 
replication-factor=1 segment-size=2 assignment=null 
properties={remote.storage.enable=true, local.retention.bytes=1, 
index.interval.bytes=1, segment.index.bytes=24}]
   
   [SUCCESS] (4) produce-records: topicB-0
   ProducerRecord(topic=topicB, partition=0, headers=RecordHeaders(headers 
= [], isReadOnly = true), key=k1, value=v1, timestamp=null)
   ProducerRecord(topic=topicB, partition=0, headers=RecordHeaders(headers 
= [], isReadOnly = true), key=k2, value=v2, timestamp=null)
   ProducerRecord(topic=topicB, partition=0, headers=RecordHeaders(headers 
= [], isReadOnly = true), key=k3, value=v3, timestamp=null)
   ProducerRecord(topic=topicB, partition=0, headers=RecordHeaders(headers 
= [], isReadOnly = true), key=k4, value=v4, timestamp=null)
   ProducerRecord(topic=topicB, partition=0, headers=RecordHeaders(headers 
= [], isReadOnly = true), key=k5, value=v5, timestamp=null)
   Segment[partition=topicB-0 offloaded-by-broker-id=0 base-offset=0 
record-count=2]
   Segment[partition=topicB-0 offloaded-by-broker-id=0 base-offset=2 
record-count=2]
   
   [SUCCESS] (5) bounce-broker: 0
   
   [SUCCESS] (6) consume-action:
 topic-partition = topicA-0
 fetch-offset = 1
 expected-record-count = 2
 expected-record-from-tiered-storage = 1
   
   [SUCCESS] (7) consume-action:
 topic-partition = topicB-0
 fetch-offset = 1
 expected-record-count = 4
 expected-record-from-tiered-storage = 3
   
   Content of local tiered storage:
   
   Broker IDFile   | Offsets |  Records 
   

   topicB-0| |  
   -ZYHNDiMkQAuKx_Hm7yettA.log |   0 | (k1, v1) 
   |   1 | (k2, v2) 
   | |  
   0002-404zVl_XTuCyOEBrnEVxvQ.log |   2 | (k3, v3) 
   |   3 | (k4, v4) 
   | |  
   topicA-0| |  
   0001-dwVJFEg8SmqzcKuJrD0C8w.log |   1 | (k2, v2) 
   | |  
   -dYoeCs5FSKSEekm7aBq3aQ.log |   0 | (k1, v1) 
   | |  
   ``` 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-14954) Use BufferPools to optimize allocation in RemoteLogInputStream

2023-08-28 Thread Arpit Goyal (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759558#comment-17759558
 ] 

Arpit Goyal commented on KAFKA-14954:
-

[~abhijeetkumar] Can I pick this up, or are you already working on it?

> Use BufferPools to optimize allocation in RemoteLogInputStream
> --
>
> Key: KAFKA-14954
> URL: https://issues.apache.org/jira/browse/KAFKA-14954
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Abhijeet Kumar
>Priority: Minor
>
> ref: https://github.com/apache/kafka/pull/13535#discussion_r1180144730



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15410) Add basic functionality integration test with tiered storage

2023-08-28 Thread Kamal Chandraprakash (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kamal Chandraprakash updated KAFKA-15410:
-
Parent: KAFKA-7739
Issue Type: Sub-task  (was: Task)

> Add basic functionality integration test with tiered storage
> 
>
> Key: KAFKA-15410
> URL: https://issues.apache.org/jira/browse/KAFKA-15410
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Kamal Chandraprakash
>Assignee: Kamal Chandraprakash
>Priority: Major
>
> Add the below basic functionality integration tests with tiered storage:
>  # PartitionsExpandTest
>  # DeleteTopicWithSecondaryStorageTest
>  # DeleteSegmentsByRetentionSizeTest
>  # DeleteSegmentsByRetentionTimeTest
>  # DeleteSegmentsDueToLogStartOffsetBreachTest
>  # EnableRemoteLogOnTopicTest
>  # ListOffsetsTest
>  # ReassignReplicaExpandTest
>  # ReassignReplicaMoveTest
>  # ReassignReplicaShrinkTest and
>  # TransactionsTestWithTieredStore





[jira] [Created] (KAFKA-15410) Add basic functionality integration test with tiered storage

2023-08-28 Thread Kamal Chandraprakash (Jira)
Kamal Chandraprakash created KAFKA-15410:


 Summary: Add basic functionality integration test with tiered 
storage
 Key: KAFKA-15410
 URL: https://issues.apache.org/jira/browse/KAFKA-15410
 Project: Kafka
  Issue Type: Task
Reporter: Kamal Chandraprakash
Assignee: Kamal Chandraprakash


Add the below basic functionality integration tests with tiered storage:
 # PartitionsExpandTest
 # DeleteTopicWithSecondaryStorageTest
 # DeleteSegmentsByRetentionSizeTest
 # DeleteSegmentsByRetentionTimeTest
 # DeleteSegmentsDueToLogStartOffsetBreachTest
 # EnableRemoteLogOnTopicTest
 # ListOffsetsTest
 # ReassignReplicaExpandTest
 # ReassignReplicaMoveTest
 # ReassignReplicaShrinkTest and
 # TransactionsTestWithTieredStore





[GitHub] [kafka] satishd commented on a diff in pull request #14301: KAFKA-15351: Ensure log-start-offset not updated to local-log-start-offset when remote storage enabled

2023-08-28 Thread via GitHub


satishd commented on code in PR #14301:
URL: https://github.com/apache/kafka/pull/14301#discussion_r1307431851


##
storage/src/test/java/org/apache/kafka/tiered/storage/integration/OffloadAndConsumeFromLeaderTest.java:
##
@@ -127,7 +127,7 @@ protected void 
writeTestSpecifications(TieredStorageTestBuilder builder) {
  *   - For topic B, only one segment is present in the 
tiered storage, as asserted by the
  * previous sub-test-case.
  */
-// .bounce(broker)
+.bounce(broker)

Review Comment:
   Can you remove the prefix (A) in the docs of this class?









[GitHub] [kafka] satishd commented on a diff in pull request #14301: KAFKA-15351: Ensure log-start-offset not updated to local-log-start-offset when remote storage enabled

2023-08-28 Thread via GitHub


satishd commented on code in PR #14301:
URL: https://github.com/apache/kafka/pull/14301#discussion_r1307433933


##
storage/src/test/java/org/apache/kafka/tiered/storage/integration/OffloadAndConsumeFromLeaderTest.java:
##
@@ -127,7 +127,7 @@ protected void 
writeTestSpecifications(TieredStorageTestBuilder builder) {
  *   - For topic B, only one segment is present in the 
tiered storage, as asserted by the
  * previous sub-test-case.
  */
-// .bounce(broker)
+.bounce(broker)

Review Comment:
   Please remove (A) from test documentation in this class. 
   For ex:
   ```
   Test Cases (A):
   ...
   (A.1)
   
   (A.2)
   
   (A.3) Stops and restarts the broker.
   ...
   ```






[jira] [Created] (KAFKA-15411) DelegationTokenEndToEndAuthorizationWithOwnerTest is Flaky

2023-08-28 Thread Proven Provenzano (Jira)
Proven Provenzano created KAFKA-15411:
-

 Summary: DelegationTokenEndToEndAuthorizationWithOwnerTest is 
Flaky 
 Key: KAFKA-15411
 URL: https://issues.apache.org/jira/browse/KAFKA-15411
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Reporter: Proven Provenzano
Assignee: Proven Provenzano
 Fix For: 3.6.0


DelegationTokenEndToEndAuthorizationWithOwnerTest has become flaky since the 
merge of delegation token support for KRaft.





[GitHub] [kafka] pprovenzano commented on pull request #14083: KAFKA-15219: KRaft support for DelegationTokens

2023-08-28 Thread via GitHub


pprovenzano commented on PR #14083:
URL: https://github.com/apache/kafka/pull/14083#issuecomment-1695736206

   > DelegationTokenEndToEndAuthorizationWithOwnerTest
   
   I created https://issues.apache.org/jira/browse/KAFKA-15411 and will start 
looking at it today.





[jira] [Updated] (KAFKA-15353) Empty ISR returned from controller after AlterPartition request

2023-08-28 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison updated KAFKA-15353:
---
Affects Version/s: 3.5.1

> Empty ISR returned from controller after AlterPartition request
> ---
>
> Key: KAFKA-15353
> URL: https://issues.apache.org/jira/browse/KAFKA-15353
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.5.0, 3.5.1
>Reporter: Luke Chen
>Assignee: Calvin Liu
>Priority: Blocker
> Fix For: 3.6.0, 3.5.2
>
>
> In 
> [KIP-903|https://cwiki.apache.org/confluence/display/KAFKA/KIP-903%3A+Replicas+with+stale+broker+epoch+should+not+be+allowed+to+join+the+ISR],
>  (more specifically this [PR|https://github.com/apache/kafka/pull/13408]), we 
> bumped the AlterPartitionRequest version to 3 to use `NewIsrWithEpochs` field 
> instead of `NewIsr` one. And when building the request for older version, 
> we'll manually convert/downgrade the request into the older version for 
> backward compatibility 
> [here|https://github.com/apache/kafka/blob/6bd17419b76f8cf8d7e4a11c071494dfaa72cd50/clients/src/main/java/org/apache/kafka/common/requests/AlterPartitionRequest.java#L85-L96],
>  to extract ISR info from `NewIsrWithEpochs` and then fill in the `NewIsr` 
> field, and then clear the `NewIsrWithEpochs` field.
>  
> The problem is, when the AlterPartitionRequest sent out for the first time, 
> if there's some transient error (ex: NOT_CONTROLLER), we'll retry. On the 
> retry, we'll build the AlterPartitionRequest again. But this time, the 
> request data is the one that already converted above. At this point, when we 
> try to extract the ISR from `NewIsrWithEpochs`, we'll get empty. So, we'll 
> send out an AlterPartition request with empty ISR, and impacting the kafka 
> availability.
>  
> From the log, I can see this:
> {code:java}
> [2023-08-16 03:57:55,122] INFO [Partition test_topic-1 broker=3] ISR updated 
> to  (under-min-isr) and version updated to 9 (kafka.cluster.Partition)
> ...
> [2023-08-16 03:57:55,157] ERROR [ReplicaManager broker=3] Error processing 
> append operation on partition test_topic-1 
> (kafka.server.ReplicaManager)org.apache.kafka.common.errors.NotEnoughReplicasException:
>  The size of the current ISR Set() is insufficient to satisfy the min.isr 
> requirement of 2 for partition test_topic-1 {code}
>  
> h4. *Impact:*
> This will happen when users trying to upgrade from versions < 3.5.0 to 3.5.0 
> or later. During the rolling upgrade, there will be some nodes in v3.5.0, and 
> some are not. So, for the node in v3.5.0 will try to build an old version of 
> AlterPartitionRequest. And then, if it happen to have some transient error 
> during the AlterPartitionRequest send, the ISR will be empty and no producers 
> will be able to write data to the partitions.
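
The retry failure described above can be illustrated with a minimal sketch (hypothetical classes, a simplification rather than the actual `AlterPartitionRequest.Builder` code): building the downgraded request mutates the shared request data in place, so a rebuild on retry finds nothing left to extract.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplification of the KAFKA-15353 bug: downgrading the request
// for an old broker version moves ISR info out of newIsrWithEpochs and then
// clears it, mutating the caller's shared data.
class AlterPartitionDataSketch {
    final List<Integer> newIsr = new ArrayList<>();
    final List<int[]> newIsrWithEpochs = new ArrayList<>(); // {brokerId, brokerEpoch}

    AlterPartitionDataSketch buildForVersion(short version) {
        newIsr.clear();
        if (version < 3) {
            for (int[] entry : newIsrWithEpochs) {
                newIsr.add(entry[0]); // keep only the broker id
            }
            newIsrWithEpochs.clear(); // in-place mutation: lost for any retry
        }
        return this;
    }
}
```

Building from a fresh copy of the data on every attempt, instead of mutating the shared instance, avoids the problem.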





[GitHub] [kafka] dajac merged pull request #14120: KAFKA-14499: [4/N] Implement OffsetFetch API

2023-08-28 Thread via GitHub


dajac merged PR #14120:
URL: https://github.com/apache/kafka/pull/14120





[GitHub] [kafka] dajac commented on a diff in pull request #14271: KAFKA-14503: Implement ListGroups

2023-08-28 Thread via GitHub


dajac commented on code in PR #14271:
URL: https://github.com/apache/kafka/pull/14271#discussion_r1307482348


##
clients/src/main/resources/common/message/ListGroupsRequest.json:
##
@@ -23,11 +23,15 @@
   // Version 3 is the first flexible version.
   //
   // Version 4 adds the StatesFilter field (KIP-518).
-  "validVersions": "0-4",
+  //
+  // Version 5 adds the TypesFilter field (KIP-848).
+  "validVersions": "0-5",
   "flexibleVersions": "3+",
   "fields": [
 { "name": "StatesFilter", "type": "[]string", "versions": "4+",
   "about": "The states of the groups we want to list. If empty all groups 
are returned with their state."
-}
+},
+{ "name": "TypesFilter", "type": "[]string", "versions": "5+",
+  "about": "The types of the groups we want to list. If empty all groups 
are returned" }

Review Comment:
   If the tests are related to the new filter, they should go to the next PR.






[GitHub] [kafka] dajac commented on a diff in pull request #14271: KAFKA-14503: Implement ListGroups

2023-08-28 Thread via GitHub


dajac commented on code in PR #14271:
URL: https://github.com/apache/kafka/pull/14271#discussion_r1307483188


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupCoordinatorService.java:
##
@@ -426,9 +429,43 @@ public CompletableFuture 
listGroups(
 return 
FutureUtils.failedFuture(Errors.COORDINATOR_NOT_AVAILABLE.exception());
 }
 
-return FutureUtils.failedFuture(Errors.UNSUPPORTED_VERSION.exception(
-"This API is not implemented yet."
-));
+List> futures = new 
java.util.ArrayList<>(Collections.emptyList());
+for (int i = 0; i < numPartitions; i++) {
+futures.add(runtime.scheduleReadOperation("list_groups",
+new TopicPartition(Topic.GROUP_METADATA_TOPIC_NAME, i),
+(coordinator, __) -> coordinator.listGroups(context, 
request)

Review Comment:
   Right. Then you must pass `lastCommittedOffset` to `listGroups` as well and 
use it to query the timeline data structures.
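
The fan-out in the snippet above, with one read operation scheduled per `__consumer_offsets` partition, can be sketched as follows (assumed shape; plain completed futures stand in for `runtime.scheduleReadOperation`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Merge per-partition listGroups results once every scheduled read completes.
class ListGroupsFanOutSketch {
    static CompletableFuture<List<String>> merge(List<CompletableFuture<List<String>>> futures) {
        return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
            .thenApply(nothing -> {
                List<String> groups = new ArrayList<>();
                for (CompletableFuture<List<String>> future : futures) {
                    groups.addAll(future.join()); // already completed, join() cannot block
                }
                return groups;
            });
    }
}
```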






[GitHub] [kafka] dajac commented on a diff in pull request #14271: KAFKA-14503: Implement ListGroups

2023-08-28 Thread via GitHub


dajac commented on code in PR #14271:
URL: https://github.com/apache/kafka/pull/14271#discussion_r1307484913


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupCoordinatorService.java:
##
@@ -426,9 +429,43 @@ public CompletableFuture 
listGroups(
 return 
FutureUtils.failedFuture(Errors.COORDINATOR_NOT_AVAILABLE.exception());
 }
 
-return FutureUtils.failedFuture(Errors.UNSUPPORTED_VERSION.exception(
-"This API is not implemented yet."
-));
+List> futures = new 
java.util.ArrayList<>(Collections.emptyList());
+for (int i = 0; i < numPartitions; i++) {
+futures.add(runtime.scheduleReadOperation("list_groups",
+new TopicPartition(Topic.GROUP_METADATA_TOPIC_NAME, i),
+(coordinator, __) -> coordinator.listGroups(context, 
request)

Review Comment:
   Basically 
[here](https://github.com/apache/kafka/pull/14271/files#diff-00f0f81cf13e66781777d94f7d2e68a581663385c37e98792507f2294c91bb09R418),
 you need to use `lastCommittedOffset` to ensure that you only read committed 
state.






[GitHub] [kafka] dajac commented on a diff in pull request #14271: KAFKA-14503: Implement ListGroups

2023-08-28 Thread via GitHub


dajac commented on code in PR #14271:
URL: https://github.com/apache/kafka/pull/14271#discussion_r1307484913


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupCoordinatorService.java:
##
@@ -426,9 +429,43 @@ public CompletableFuture 
listGroups(
 return 
FutureUtils.failedFuture(Errors.COORDINATOR_NOT_AVAILABLE.exception());
 }
 
-return FutureUtils.failedFuture(Errors.UNSUPPORTED_VERSION.exception(
-"This API is not implemented yet."
-));
+List> futures = new 
java.util.ArrayList<>(Collections.emptyList());
+for (int i = 0; i < numPartitions; i++) {
+futures.add(runtime.scheduleReadOperation("list_groups",
+new TopicPartition(Topic.GROUP_METADATA_TOPIC_NAME, i),
+(coordinator, __) -> coordinator.listGroups(context, 
request)

Review Comment:
   Basically 
[here](https://github.com/apache/kafka/pull/14271/files#diff-00f0f81cf13e66781777d94f7d2e68a581663385c37e98792507f2294c91bb09R418),
 you need to use `lastCommittedOffset` to ensure that you only read committed 
state. I don't know if we support the Stream API, but the other APIs should 
accept an argument called `epoch`. You can use `lastCommittedOffset` as the 
`epoch`.
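
A minimal sketch of reading at a committed offset (assumed semantics; Kafka's actual timeline data structures in `org.apache.kafka.timeline` differ in detail): each write is tagged with its offset, and a read at `lastCommittedOffset` only sees writes at or below it.

```java
import java.util.Map;
import java.util.TreeMap;

// Toy single-key timeline: put() records a value per write offset, readAt()
// returns the newest value whose write offset is <= the given committed offset.
class TimelineValueSketch {
    private final TreeMap<Long, String> history = new TreeMap<>();

    void put(long writeOffset, String value) {
        history.put(writeOffset, value);
    }

    String readAt(long lastCommittedOffset) {
        Map.Entry<Long, String> entry = history.floorEntry(lastCommittedOffset);
        return entry == null ? null : entry.getValue();
    }
}
```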






[jira] [Created] (KAFKA-15412) Reading an unknown version of quorum-state-file should trigger an error

2023-08-28 Thread John Mannooparambil (Jira)
John Mannooparambil created KAFKA-15412:
---

 Summary: Reading an unknown version of quorum-state-file should 
trigger an error
 Key: KAFKA-15412
 URL: https://issues.apache.org/jira/browse/KAFKA-15412
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Reporter: John Mannooparambil


Reading an unknown version of quorum-state-file should trigger an error. 
Currently the only known version is 0. Reading any other version should cause 
an error. 
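
A sketch of the intended check (hypothetical method and version constant, not the actual KRaft state-store code):

```java
// Reject a quorum-state-file whose data version is not one we know how to read.
class QuorumStateVersionSketch {
    static final short HIGHEST_SUPPORTED_VERSION = 0;

    static void checkVersion(short dataVersion) {
        if (dataVersion < 0 || dataVersion > HIGHEST_SUPPORTED_VERSION) {
            throw new IllegalStateException(
                "Unknown quorum state file version: " + dataVersion);
        }
    }
}
```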





[jira] [Created] (KAFKA-15413) kafka-server-stop fails with COLUMNS environment variable on Ubuntu

2023-08-28 Thread Takashi Sakai (Jira)
Takashi Sakai created KAFKA-15413:
-

 Summary: kafka-server-stop fails with COLUMNS environment variable 
on Ubuntu
 Key: KAFKA-15413
 URL: https://issues.apache.org/jira/browse/KAFKA-15413
 Project: Kafka
  Issue Type: Bug
  Components: tools
 Environment: kafka: 3.5.1
Java: openjdk version "20.0.1" 2023-04-18
OS: Ubuntu 22.04.3 LTS on WSL2/Windows 11
Reporter: Takashi Sakai


The {{kafka-server-stop}} script does not work if the {{COLUMNS}} environment 
variable is set on Ubuntu.

{*}Steps to reproduce{*}:
kafka/zookeeper.properties
{noformat}
dataDir=/tmp/kafka-test-20230828-15217-1lop1tk/zookeeper
clientPort=34461
maxClientCnxns=0
admin.enableServer=false
{noformat}
kafka/server.properties
{noformat}
broker.id=0
listeners=PLAINTEXT://:46161
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-test-20230828-15217-1lop1tk/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.retention.check.interval.ms=30
zookeeper.connect=localhost:34461
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
{noformat}
{noformat}
$ zookeeper-server-start kafka/zookeeper.properties >/dev/null 2>&1 &
[1] 18593
$ kafka-server-start kafka/server.properties >/dev/null 2>&1 &
[2] 18982
$ COLUMNS=10 kafka-server-stop # This is unexpected
No kafka server to stop
$ kafka-server-stop
$ zookeeper-server-stop
[2]+  Exit 143                kafka-server-start kafka/server.properties
$ 
[1]+  Exit 143                zookeeper-server-start kafka/zookeeper.properties 
{noformat}
In the third command, I specified the {{COLUMNS}} environment variable, which 
caused the {{kafka-server-stop}} script to fail to find the Kafka process.

*Cause*

{{kafka-server-stop}} script uses {{ps ax}} to find kafka process.
{noformat}
OSNAME=$(uname -s)
if [[ "$OSNAME" == "OS/390" ]]; then
(snip)
elif [[ "$OSNAME" == "OS400" ]]; then
(snip)
else
PIDS=$(ps ax | grep ' kafka\.Kafka ' | grep java | grep -v grep | awk 
'{print $1}')
fi
{noformat}
On Ubuntu, {{ps ax}} truncates its output if the {{COLUMNS}} environment 
variable is set.

(The [source code of the ps 
command|https://gitlab.com/procps-ng/procps/-/blob/675246119df143a5f8ced6e3313edac6ccc3e222/src/ps/global.c#L226-L230]
 shows that the {{COLUMNS}} environment variable overrides the result of 
{{isatty}}.)
{noformat}
$ ps ax | cat
  19912 pts/0Sl 0:03 
/home/linuxbrew/.linuxbrew/opt/openjdk/libexec/bin/java -Xmx1G -Xms1G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
-XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true 
-Xlog:gc*:file=/home/linuxbrew/.linuxbrew/Cellar/kafka/3.5.1/libexec/bin/../logs/kafkaServer-gc.log:time,tags:filecount=10,filesize=100M
 -Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dkafka.logs.dir=/home/linuxbrew/.linuxbrew/Cellar/kafka/3.5.1/libexec/bin/../logs
 
-Dlog4j.configuration=file:/home/linuxbrew/.linuxbrew/Cellar/kafka/3.5.1/libexec/bin/../config/log4j.properties
 -cp 
/home/linuxbrew/.linuxbrew/Cellar/kafka/3.5.1/libexec/bin/../libs/activation-1.1.1.jar:(snip):/home/linuxbrew/.linuxbrew/Cellar/kafka/3.5.1/libexec/bin/../libs/zstd-jni-1.5.5-1.jar
 kafka.Kafka kafka/server.properties
$ COLUMNS=10 ps ax | cat
  19912 pts/0Sl 0:05 /home/linux
{noformat}
I tested this on WSL2 on Windows with openjdk installed via Homebrew, but it 
should occur on any environment with {{procps-ng}}.
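
The pipeline's sensitivity to the truncation can be reproduced without a running broker by feeding it a full and a truncated {{ps}} line (the paths below are illustrative):

```shell
# A full `ps ax` line still contains ' kafka.Kafka '; a COLUMNS-truncated one does not.
full='19912 pts/0 Sl 0:05 /usr/bin/java -Xmx1G ... kafka.Kafka kafka/server.properties'
truncated='19912 pts/0 Sl 0:05 /usr/bin/ja'

# Same pipeline as kafka-server-stop.
pids_full=$(printf '%s\n' "$full" | grep ' kafka\.Kafka ' | grep java | grep -v grep | awk '{print $1}')
pids_truncated=$(printf '%s\n' "$truncated" | grep ' kafka\.Kafka ' | grep java | grep -v grep | awk '{print $1}')

echo "full=[$pids_full] truncated=[$pids_truncated]"   # full=[19912] truncated=[]
```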

*Problem*

This caused a CI failure in the Homebrew project. 
([GitHub/Homebrew/homebrew-core#133887])

Homebrew's behavior of passing the {{COLUMNS}} environment variable through 
seems like a bug. However, the {{kafka-server-stop}} script should not be 
affected by such an environment variable, so this also seems like a bug to me.

*Related issues*

This problem, KAFKA-4931, and KAFKA-4110 could all be fixed by introducing a 
PID file. However, the three problems have different causes and can be 
addressed separately.





[GitHub] [kafka] satishd commented on pull request #14297: MINOR: Fix the TBRLMMRestart test.

2023-08-28 Thread via GitHub


satishd commented on PR #14297:
URL: https://github.com/apache/kafka/pull/14297#issuecomment-1695842125

   There are a few test failures that are unrelated to this change. Merging to 
trunk and 3.6 branches.





[GitHub] [kafka] satishd merged pull request #14297: MINOR: Fix the TBRLMMRestart test.

2023-08-28 Thread via GitHub


satishd merged PR #14297:
URL: https://github.com/apache/kafka/pull/14297





[GitHub] [kafka] mannoopj opened a new pull request, #14302: KAFKA-15412

2023-08-28 Thread via GitHub


mannoopj opened a new pull request, #14302:
URL: https://github.com/apache/kafka/pull/14302

   Reading an unknown version of quorum-state-file should trigger an error. 
Currently the only known version is 0. Reading any other version should cause 
an error. 





[GitHub] [kafka] C0urante opened a new pull request, #14303: KAFKA-13327: Gracefully report connector validation errors instead of returning 500 responses

2023-08-28 Thread via GitHub


C0urante opened a new pull request, #14303:
URL: https://github.com/apache/kafka/pull/14303

   [Jira](https://issues.apache.org/jira/browse/KAFKA-13327)
   
   Background context: this is split off from 
https://github.com/apache/kafka/pull/11369, which addressed this issue and two 
others. Not only does this new PR fix the merge conflicts with its predecessor, 
it also simplifies the review process by addressing a single issue at a time.
   
   This change addresses an issue where some types of validation errors are 
reported via HTTP 500 responses, which is incorrect (the issue is not the fault 
of the server, but rather of the connector configuration) and can cover up 
other validation errors. Instead, these types of errors are now reported as 
part of a well-formed response body.
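
The reporting change can be sketched like this (hypothetical helper, not Connect's actual herder validation code): the failure is recorded against the offending property in the validation result, instead of escaping as an exception that the REST layer maps to a 500.

```java
import java.util.ArrayList;
import java.util.List;

// Fold a class-loading failure into per-property error messages so the REST
// layer can return a well-formed validation response instead of a 500.
class ConfigValidationSketch {
    static List<String> validateClassProperty(String property, String className) {
        List<String> errors = new ArrayList<>();
        try {
            Class.forName(className);
        } catch (ClassNotFoundException e) {
            errors.add(property + ": Class " + className + " could not be found.");
        }
        return errors;
    }
}
```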
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] C0urante commented on pull request #14303: KAFKA-13327: Gracefully report connector validation errors instead of returning 500 responses

2023-08-28 Thread via GitHub


C0urante commented on PR #14303:
URL: https://github.com/apache/kafka/pull/14303#issuecomment-1695943788

   @gharris1727 since you reviewed https://github.com/apache/kafka/pull/11369, 
would you be interested in taking a look at its new successor?





[GitHub] [kafka] C0urante commented on pull request #11369: KAFKA-13327, KAFKA-13328, KAFKA-13329: Clean up preflight connector validation

2023-08-28 Thread via GitHub


C0urante commented on PR #11369:
URL: https://github.com/apache/kafka/pull/11369#issuecomment-1695946733

   A lot of merge conflicts have accrued on this one. Instead of resolving them 
all at once, I've decided to split this PR out into three smaller PRs, which 
should also make the review process easier.
   
   The first of the three is https://github.com/apache/kafka/pull/14303; others 
to come soon.





[GitHub] [kafka] C0urante closed pull request #11369: KAFKA-13327, KAFKA-13328, KAFKA-13329: Clean up preflight connector validation

2023-08-28 Thread via GitHub


C0urante closed pull request #11369: KAFKA-13327, KAFKA-13328, KAFKA-13329: 
Clean up preflight connector validation
URL: https://github.com/apache/kafka/pull/11369





[GitHub] [kafka] jolshan commented on a diff in pull request #14264: KAFKA-15403: Refactor @Test(expected) annotation with assertThrows

2023-08-28 Thread via GitHub


jolshan commented on code in PR #14264:
URL: https://github.com/apache/kafka/pull/14264#discussion_r1307621341


##
streams/src/test/java/org/apache/kafka/streams/processor/internals/StoreToProcessorContextAdapterTest.java:
##
@@ -26,10 +26,12 @@
 import org.easymock.MockType;
 import org.junit.Before;
 import org.junit.Test;
+import static org.junit.Assert.assertThrows;
 import org.junit.runner.RunWith;
 
 import java.time.Duration;
 
+

Review Comment:
   nit: can we remove this extra space here?






[GitHub] [kafka] jolshan commented on pull request #14264: KAFKA-15403: Refactor @Test(expected) annotation with assertThrows

2023-08-28 Thread via GitHub


jolshan commented on PR #14264:
URL: https://github.com/apache/kafka/pull/14264#issuecomment-1695959642

   @Taher-Ghaleb can you pull in the latest changes from master and resolve 
conflicts?





[GitHub] [kafka] C0urante commented on a diff in pull request #14303: KAFKA-13327: Gracefully report connector validation errors instead of returning 500 responses

2023-08-28 Thread via GitHub


C0urante commented on code in PR #14303:
URL: https://github.com/apache/kafka/pull/14303#discussion_r1307643119


##
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java:
##
@@ -429,7 +429,9 @@ void enrich(ConfigDef newDef) {
 final ConfigDef.Validator typeValidator = 
ConfigDef.LambdaValidator.with(
 (String name, Object value) -> {
 validateProps(prefix);
-getConfigDefFromConfigProvidingClass(typeConfig, 
(Class) value);
+if (value != null) {

Review Comment:
   The value here is null if the class couldn't be loaded, in which case, it's 
not necessary to try to do any further kind of validation.






[jira] [Updated] (KAFKA-13329) Connect does not perform preflight validation for per-connector key and value converters

2023-08-28 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton updated KAFKA-13329:
--
Description: 
Users may specify a key and/or value converter class for their connector 
directly in the configuration for that connector. If this occurs, no preflight 
validation is performed to ensure that the specified converter is valid.

Unfortunately, the [Converter 
interface|https://github.com/apache/kafka/blob/4eb386f6e060e12e1940c0d780987e3a7c438d74/connect/api/src/main/java/org/apache/kafka/connect/storage/Converter.java]
 does not require converters to expose a {{ConfigDef}} (unlike the 
[HeaderConverter 
interface|https://github.com/apache/kafka/blob/4eb386f6e060e12e1940c0d780987e3a7c438d74/connect/api/src/main/java/org/apache/kafka/connect/storage/HeaderConverter.java#L48-L52],
 which does have that requirement), so it's unlikely that the configuration 
properties of the converter itself can be validated.

However, we can and should still validate that the converter class exists, can 
be instantiated (i.e., has a public, no-args constructor and is a concrete, 
non-abstract class), and implements the {{Converter}} interface.

*EDIT:* Since this ticket was originally filed, a {{Converter::config}} method 
was added in 
[KIP-769|https://cwiki.apache.org/confluence/display/KAFKA/KIP-769%3A+Connect+APIs+to+list+all+connector+plugins+and+retrieve+their+configuration+definitions].
 We can now utilize that config definition during preflight validation for 
connectors.

  was:
Users may specify a key and/or value converter class for their connector 
directly in the configuration for that connector. If this occurs, no preflight 
validation is performed to ensure that the specified converter is valid.

Unfortunately, the [Converter 
interface|https://github.com/apache/kafka/blob/4eb386f6e060e12e1940c0d780987e3a7c438d74/connect/api/src/main/java/org/apache/kafka/connect/storage/Converter.java]
 does not require converters to expose a {{ConfigDef}} (unlike the 
[HeaderConverter 
interface|https://github.com/apache/kafka/blob/4eb386f6e060e12e1940c0d780987e3a7c438d74/connect/api/src/main/java/org/apache/kafka/connect/storage/HeaderConverter.java#L48-L52],
 which does have that requirement), so it's unlikely that the configuration 
properties of the converter itself can be validated.

However, we can and should still validate that the converter class exists, can 
be instantiated (i.e., has a public, no-args constructor and is a concrete, 
non-abstract class), and implements the {{Converter}} interface.


> Connect does not perform preflight validation for per-connector key and value 
> converters
> 
>
> Key: KAFKA-13329
> URL: https://issues.apache.org/jira/browse/KAFKA-13329
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>
> Users may specify a key and/or value converter class for their connector 
> directly in the configuration for that connector. If this occurs, no 
> preflight validation is performed to ensure that the specified converter is 
> valid.
> Unfortunately, the [Converter 
> interface|https://github.com/apache/kafka/blob/4eb386f6e060e12e1940c0d780987e3a7c438d74/connect/api/src/main/java/org/apache/kafka/connect/storage/Converter.java]
>  does not require converters to expose a {{ConfigDef}} (unlike the 
> [HeaderConverter 
> interface|https://github.com/apache/kafka/blob/4eb386f6e060e12e1940c0d780987e3a7c438d74/connect/api/src/main/java/org/apache/kafka/connect/storage/HeaderConverter.java#L48-L52],
>  which does have that requirement), so it's unlikely that the configuration 
> properties of the converter itself can be validated.
> However, we can and should still validate that the converter class exists, 
> can be instantiated (i.e., has a public, no-args constructor and is a 
> concrete, non-abstract class), and implements the {{Converter}} interface.
> *EDIT:* Since this ticket was originally filed, a {{Converter::config}} 
> method was added in 
> [KIP-769|https://cwiki.apache.org/confluence/display/KAFKA/KIP-769%3A+Connect+APIs+to+list+all+connector+plugins+and+retrieve+their+configuration+definitions].
>  We can now utilize that config definition during preflight validation for 
> connectors.
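The preflight checks described above (class exists, is concrete, has a public no-args constructor, implements `Converter`) can be sketched with plain reflection. The class and method names below are illustrative stand-ins, not the actual Connect implementation:

```java
import java.lang.reflect.Modifier;

public class ConverterPreflight {

    // Hypothetical stand-in for org.apache.kafka.connect.storage.Converter.
    public interface Converter { }

    // A well-formed converter: concrete, public no-args constructor.
    public static class JsonConverter implements Converter {
        public JsonConverter() { }
    }

    /**
     * Returns an error message if the named class cannot be used as a
     * Converter, or null if all preflight checks pass.
     */
    public static String validateConverterClass(String className) {
        Class<?> klass;
        try {
            klass = Class.forName(className);
        } catch (ClassNotFoundException e) {
            return "Class " + className + " could not be found";
        }
        if (!Converter.class.isAssignableFrom(klass)) {
            return "Class " + className + " does not implement Converter";
        }
        if (Modifier.isAbstract(klass.getModifiers())) {
            return "Class " + className + " is abstract and cannot be instantiated";
        }
        try {
            // Requires a public no-args constructor on a concrete class.
            klass.getConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            return "Class " + className + " could not be instantiated: " + e;
        }
        return null; // all preflight checks passed
    }

    public static void main(String[] args) {
        System.out.println(validateConverterClass("ConverterPreflight$JsonConverter")); // prints: null
        System.out.println(validateConverterClass("no.such.Class"));
    }
}
```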



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15411) DelegationTokenEndToEndAuthorizationWithOwnerTest is Flaky

2023-08-28 Thread Proven Provenzano (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759643#comment-17759643
 ] 

Proven Provenzano commented on KAFKA-15411:
---

Ran a loop and couldn't get it to fail after 25 iterations.

> DelegationTokenEndToEndAuthorizationWithOwnerTest is Flaky 
> ---
>
> Key: KAFKA-15411
> URL: https://issues.apache.org/jira/browse/KAFKA-15411
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Reporter: Proven Provenzano
>Assignee: Proven Provenzano
>Priority: Major
> Fix For: 3.6.0
>
>
> DelegationTokenEndToEndAuthorizationWithOwnerTest has become flaky since the 
> merge of delegation token support for KRaft.





[jira] [Commented] (KAFKA-15408) Restart failed tasks in Kafka Connect up to a configurable max-tries

2023-08-28 Thread Sagar Rao (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759648#comment-17759648
 ] 

Sagar Rao commented on KAFKA-15408:
---

Hi, the steps are outlined here: 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

> Restart failed tasks in Kafka Connect up to a configurable max-tries
> 
>
> Key: KAFKA-15408
> URL: https://issues.apache.org/jira/browse/KAFKA-15408
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Patrick Pang
>Priority: Major
>  Labels: needs-kip
>
> h2. Issue
> Currently, Kafka Connect just reports failed tasks via the REST API, with the 
> error. Users are expected to monitor the status and restart individual 
> connectors if there are transient errors. Unfortunately these are common for 
> database connectors, e.g. transient connection errors, DNS flips, database 
> downtime, etc. Kafka Connect silently failing in these scenarios would 
> lead to stale data downstream.
> h2. Proposal
> Kafka Connect should be able to restart failed tasks automatically, up to a 
> configurable max-tries.
> h2. Prior arts
>  * 
> [https://github.com/strimzi/proposals/blob/main/007-restarting-kafka-connect-connectors-and-tasks.md]
>  
>  * 
> [https://docs.aiven.io/docs/products/kafka/kafka-connect/howto/enable-automatic-restart]
>  





[jira] [Assigned] (KAFKA-15372) MM2 rolling restart can drop configuration changes silently

2023-08-28 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris reassigned KAFKA-15372:
---

Assignee: Greg Harris

> MM2 rolling restart can drop configuration changes silently
> ---
>
> Key: KAFKA-15372
> URL: https://issues.apache.org/jira/browse/KAFKA-15372
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: Daniel Urban
>Assignee: Greg Harris
>Priority: Major
> Fix For: 3.6.0
>
>
> When MM2 is restarted, it tries to update the Connector configuration in all 
> flows. This is a one-time attempt, which fails if the Connect worker is not 
> the leader of the group.
> In a distributed setup and with a rolling restart, it is possible that for a 
> specific flow, the Connect worker of the just restarted MM2 instance is not 
> the leader, meaning that Connector configurations can get dropped.
> For example, assuming 2 MM2 instances, and one flow A->B:
>  # MM2 instance 1 is restarted, the worker inside MM2 instance 2 becomes the 
> leader of A->B Connect group.
>  # MM2 instance 1 tries to update the Connector configurations, but fails 
> (instance 2 has the leader, not instance 1)
>  # MM2 instance 2 is restarted, leadership moves to worker in MM2 instance 1
>  # MM2 instance 2 tries to update the Connector configurations, but fails
> At this point, the configuration changes before the restart are never 
> applied. Many times, this can also happen silently, without any indication.





[jira] [Commented] (KAFKA-14517) Implement regex subscriptions

2023-08-28 Thread Jimmy Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759655#comment-17759655
 ] 

Jimmy Wang commented on KAFKA-14517:


 Can I pick this up? I'm interested in this issue and would like to work on it.

> Implement regex subscriptions
> -
>
> Key: KAFKA-14517
> URL: https://issues.apache.org/jira/browse/KAFKA-14517
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Priority: Major
>






[jira] [Updated] (KAFKA-15411) DelegationTokenEndToEndAuthorizationWithOwnerTest is Flaky

2023-08-28 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya updated KAFKA-15411:
-
Labels: flaky-test  (was: )

> DelegationTokenEndToEndAuthorizationWithOwnerTest is Flaky 
> ---
>
> Key: KAFKA-15411
> URL: https://issues.apache.org/jira/browse/KAFKA-15411
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Reporter: Proven Provenzano
>Assignee: Proven Provenzano
>Priority: Major
>  Labels: flaky-test
> Fix For: 3.6.0
>
>
> DelegationTokenEndToEndAuthorizationWithOwnerTest has become flaky since the 
> merge of delegation token support for KRaft.





[GitHub] [kafka] philipnee commented on a diff in pull request #14118: KAFKA-14875: Implement wakeup

2023-08-28 Thread via GitHub


philipnee commented on code in PR #14118:
URL: https://github.com/apache/kafka/pull/14118#discussion_r1307713180


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/WakeupTrigger.java:
##
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.clients.consumer.internals;
+
+import org.apache.kafka.common.KafkaException;
+import org.apache.kafka.common.errors.WakeupException;
+
+import java.util.Objects;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * Ensures blocking APIs can be woken up by the consumer.wakeup().
+ */
+public class WakeupTrigger {
+private AtomicReference<Wakeupable> pendingTask = new 
AtomicReference<>(null);
+
+/*
+Wakeup a pending task.  If there isn't any pending task, return a 
WakedupFuture, so that the subsequent call
+would know wakeup was previously called.
+
+If there is an active task, complete it with WakeupException, then unset the 
pending task (return null here).
+If the current task has already been woken up, do nothing.
+ */
+public void wakeup() {
+pendingTask.getAndUpdate(task -> {
+if (task == null) {
+return new WakedupFuture();

Review Comment:
   Hey Jun - For the blocking calls (syncCommit, for example), the future should 
have been cleared in the finally block. I didn't implement wakeup for the poll 
method because it is not yet ready in the current state of trunk. (I'm not sure 
if I'm answering your question.)
   
   Basically - the wakeup future should be cleared in the finally blocks for 
the blocking APIs; see commitSync. The poll API is still incomplete, so I didn't 
clear the future there.
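The try/finally discipline described above (register the blocking call's future, always clear it in a finally block) can be shown standalone. This is a much-simplified sketch, not the PR's actual `WakeupTrigger`: it tracks a single active future and does not model the "wakeup before a task is registered" case:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.atomic.AtomicReference;

public class WakeupDemo {

    // Illustrative stand-in for the trigger discussed in the review thread.
    static class SimpleWakeupTrigger {
        private final AtomicReference<CompletableFuture<?>> active = new AtomicReference<>();

        // Complete the active future exceptionally, if one is registered.
        void wakeup() {
            CompletableFuture<?> f = active.getAndSet(null);
            if (f != null) {
                f.completeExceptionally(new RuntimeException("wakeup requested"));
            }
        }

        <T> CompletableFuture<T> setActiveTask(CompletableFuture<T> task) {
            if (!active.compareAndSet(null, task)) {
                throw new IllegalStateException("Last active task is still active");
            }
            return task;
        }

        void clearActiveTask() {
            active.set(null);
        }
    }

    public static void main(String[] args) {
        SimpleWakeupTrigger trigger = new SimpleWakeupTrigger();
        CompletableFuture<String> pending = new CompletableFuture<>();
        // A blocking API registers its future, blocks, and always clears the
        // future in a finally block, as described in the review comment.
        trigger.setActiveTask(pending);
        trigger.wakeup(); // completes the pending future exceptionally
        try {
            pending.join();
        } catch (CompletionException e) {
            System.out.println("woken up: " + e.getCause().getMessage());
        } finally {
            trigger.clearActiveTask();
        }
    }
}
```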






[GitHub] [kafka] philipnee commented on a diff in pull request #14118: KAFKA-14875: Implement wakeup

2023-08-28 Thread via GitHub


philipnee commented on code in PR #14118:
URL: https://github.com/apache/kafka/pull/14118#discussion_r1307729523


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/WakeupTrigger.java:
##
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.clients.consumer.internals;
+
+import org.apache.kafka.common.KafkaException;
+import org.apache.kafka.common.errors.WakeupException;
+
+import java.util.Objects;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * Ensures blocking APIs can be woken up by the consumer.wakeup().
+ */
+public class WakeupTrigger {
+private AtomicReference<Wakeupable> pendingTask = new 
AtomicReference<>(null);
+
+/*
+  Wakeup a pending task.  If there isn't any pending task, return a 
WakedupFuture, so that the subsequent call
+  would know wakeup was previously called.
+
+  If there is an active task, complete it with WakeupException, then unset the 
pending task (return null here).
+  If the current task has already been woken up, do nothing.
+ */
+public void wakeup() {
+pendingTask.getAndUpdate(task -> {
+if (task == null) {
+return new WakedupFuture();
+} else if (task instanceof ActiveFuture) {
+ActiveFuture active = (ActiveFuture) task;
+active.future().completeExceptionally(new WakeupException());
+return null;
+} else {
+return task;
+}
+});
+}
+
+/*
+If there is no pending task, set the pending task active.
+If wakeup was called before setting an active task, the current task will 
complete exceptionally with
+WakeupException right away.
+If there is an active task, throw an exception.
+ */
+public <T> CompletableFuture<T> setActiveTask(final CompletableFuture<T> 
currentTask) {
+Objects.requireNonNull(currentTask, "currentTask cannot be null");
+pendingTask.getAndUpdate(task -> {
+if (task == null) {
+return new ActiveFuture(currentTask);
+} else if (task instanceof WakedupFuture) {
+currentTask.completeExceptionally(new WakeupException());
+return null;
+}
+// last active state is still active
+throw new KafkaException("Last active task is still active");
+});
+return currentTask;
+}
+
+public void clearActiveTask() {
+pendingTask.getAndUpdate(task -> {
+if (task == null) {
+return null;
+} else if (task instanceof ActiveFuture) {
+return null;
+}
+return task;
+});
+}
+
+Wakeupable getPendingTask() {
+return pendingTask.get();
+}
+
+interface Wakeupable { }
+
+static class ActiveFuture implements Wakeupable {
+private final CompletableFuture<?> future;
+
+public ActiveFuture(final CompletableFuture<?> future) {
+this.future = future;
+}
+
+public CompletableFuture<?> future() {
+return future;
+}
+}
+
+static class WakedupFuture implements Wakeupable { }

Review Comment:
   Maybe just WakeupFuture? I guess it is a future for waking up a call.






[GitHub] [kafka] Taher-Ghaleb commented on pull request #14264: KAFKA-15403: Refactor @Test(expected) annotation with assertThrows

2023-08-28 Thread via GitHub


Taher-Ghaleb commented on PR #14264:
URL: https://github.com/apache/kafka/pull/14264#issuecomment-1696099110

   Hi @jolshan. I resolved the conflict and removed the extra space. Thanks.





[GitHub] [kafka] gharris1727 commented on a diff in pull request #14303: KAFKA-13327: Gracefully report connector validation errors instead of returning 500 responses

2023-08-28 Thread via GitHub


gharris1727 commented on code in PR #14303:
URL: https://github.com/apache/kafka/pull/14303#discussion_r1307738833


##
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java:
##
@@ -429,7 +429,9 @@ void enrich(ConfigDef newDef) {
 final ConfigDef.Validator typeValidator = 
ConfigDef.LambdaValidator.with(
 (String name, Object value) -> {
 validateProps(prefix);
-getConfigDefFromConfigProvidingClass(typeConfig, 
(Class<?>) value);
+if (value != null) {

Review Comment:
   Ah, this hides the exception message from "Not a (something)" and only shows 
the "Missing required configuration (name) which has no default value". I think 
that is reasonable.
   
   The other call-site for this method is ConnectorConfig.EnrichablePlugin, 
which swallows this error on the validate() code path and propagates it when 
instantiating the ConnectorConfig. Instantiating the ConnectorConfig will also 
throw the "Missing required configuration" error, so it is not necessary to 
throw the error here.
   
   I think you could safely change the getConfigDefFromConfigProvidingClass 
implementation to return an empty stream when the value is null, rather than 
throwing an exception. I don't think this is necessary, but maybe it keeps 
these two code paths more similar.



##
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/SinkConnectorConfig.java:
##
@@ -90,50 +93,97 @@ public SinkConnectorConfig(Plugins plugins, Map<String, String> props) {
  * @param props sink configuration properties
  */
 public static void validate(Map<String, String> props) {
-final boolean hasTopicsConfig = hasTopicsConfig(props);
-final boolean hasTopicsRegexConfig = hasTopicsRegexConfig(props);
-final boolean hasDlqTopicConfig = hasDlqTopicConfig(props);
+validate(
+props,
+error -> {
+throw new ConfigException(error.property, error.value, 
error.errorMessage);
+}
+);
+}
+
+/**
+ * Perform preflight validation for the sink-specific properties for a 
connector.
+ *
+ * @param props   the configuration for the sink connector
+ * @param validatedConfig any already-known {@link ConfigValue validation 
results} for the configuration.
+ *May be empty, but may not be null. Any 
configuration errors discovered by this method will
+ *be {@link ConfigValue#addErrorMessage(String) 
added} to a value in this map, adding a new
+ *entry if one for the problematic property does 
not already exist.
+ */
+public static void validate(Map<String, String> props, Map<String, ConfigValue> validatedConfig) {
+validate(props, error -> addErrorMessage(validatedConfig, error));
+}
+
+private static void validate(Map<String, String> props, 
Consumer<ConfigError> onError) {
+final String topicsList = props.get(TOPICS_CONFIG);
+final String topicsRegex = props.get(TOPICS_REGEX_CONFIG);
+final String dlqTopic = props.getOrDefault(DLQ_TOPIC_NAME_CONFIG, 
"").trim();
+final boolean hasTopicsConfig = !Utils.isBlank(topicsList);
+final boolean hasTopicsRegexConfig = !Utils.isBlank(topicsRegex);
+final boolean hasDlqTopicConfig = !Utils.isBlank(dlqTopic);
 
 if (hasTopicsConfig && hasTopicsRegexConfig) {
-throw new ConfigException(SinkTask.TOPICS_CONFIG + " and " + 
SinkTask.TOPICS_REGEX_CONFIG +
-" are mutually exclusive options, but both are set.");
+String errorMessage = TOPICS_CONFIG + " and " + 
TOPICS_REGEX_CONFIG + " are mutually exclusive options, but both are set.";
+onError.accept(new ConfigError(TOPICS_CONFIG, topicsList, 
errorMessage));
+onError.accept(new ConfigError(TOPICS_REGEX_CONFIG, topicsRegex, 
errorMessage));
 }
 
 if (!hasTopicsConfig && !hasTopicsRegexConfig) {
-throw new ConfigException("Must configure one of " +
-SinkTask.TOPICS_CONFIG + " or " + 
SinkTask.TOPICS_REGEX_CONFIG);
+String errorMessage = "Must configure one of " + TOPICS_CONFIG + " 
or " + TOPICS_REGEX_CONFIG;
+onError.accept(new ConfigError(TOPICS_CONFIG, topicsList, 
errorMessage));
+onError.accept(new ConfigError(TOPICS_REGEX_CONFIG, topicsRegex, 
errorMessage));
 }
 
 if (hasDlqTopicConfig) {
-String dlqTopic = props.get(DLQ_TOPIC_NAME_CONFIG).trim();
 if (hasTopicsConfig) {
 List<String> topics = parseTopicsList(props);
 if (topics.contains(dlqTopic)) {
-throw new ConfigException(String.format("The DLQ topic 
'%s' may not be included in the list of "
-+ "topics ('%s=%s') consumed by the connector", 
dlqTopic, SinkTask.TOPICS_REGEX_CONFIG, topics));
+  

[GitHub] [kafka] dopuskh3 commented on a diff in pull request #13561: KAFKA-14888: Added remote log segments retention functionality based on time and size.

2023-08-28 Thread via GitHub


dopuskh3 commented on code in PR #13561:
URL: https://github.com/apache/kafka/pull/13561#discussion_r1307788374


##
core/src/main/java/kafka/log/remote/RemoteLogManager.java:
##
@@ -761,11 +784,385 @@ public void run() {
 }
 }
 
+public void handleLogStartOffsetUpdate(TopicPartition topicPartition, 
long remoteLogStartOffset) {
+if (isLeader()) {
+logger.debug("Updating {} with remoteLogStartOffset: {}", 
topicPartition, remoteLogStartOffset);
+updateRemoteLogStartOffset.accept(topicPartition, 
remoteLogStartOffset);
+}
+}
+
+class RemoteLogRetentionHandler {
+
+private final Optional<RetentionSizeData> retentionSizeData;
+private final Optional<RetentionTimeData> retentionTimeData;
+
+private long remainingBreachedSize;
+
+private OptionalLong logStartOffset = OptionalLong.empty();
+
+public RemoteLogRetentionHandler(Optional<RetentionSizeData> 
retentionSizeData, Optional<RetentionTimeData> retentionTimeData) {
+this.retentionSizeData = retentionSizeData;
+this.retentionTimeData = retentionTimeData;
+remainingBreachedSize = retentionSizeData.map(sizeData -> 
sizeData.remainingBreachedSize).orElse(0L);
+}
+
+private boolean 
deleteRetentionSizeBreachedSegments(RemoteLogSegmentMetadata metadata) throws 
RemoteStorageException, ExecutionException, InterruptedException {
+if (!retentionSizeData.isPresent()) {
+return false;
+}
+
+boolean isSegmentDeleted = deleteRemoteLogSegment(metadata, x 
-> {
+// Assumption that segments contain size >= 0
+if (remainingBreachedSize > 0) {
+long remainingBytes = remainingBreachedSize - 
x.segmentSizeInBytes();
+if (remainingBytes >= 0) {
+remainingBreachedSize = remainingBytes;
+return true;
+}
+}
+
+return false;
+});
+if (isSegmentDeleted) {
+logStartOffset = OptionalLong.of(metadata.endOffset() + 1);
+logger.info("Deleted remote log segment {} due to 
retention size {} breach. Log size after deletion will be {}.",
+metadata.remoteLogSegmentId(), 
retentionSizeData.get().retentionSize, remainingBreachedSize + 
retentionSizeData.get().retentionSize);
+}
+return isSegmentDeleted;
+}
+
+public boolean 
deleteRetentionTimeBreachedSegments(RemoteLogSegmentMetadata metadata)
+throws RemoteStorageException, ExecutionException, 
InterruptedException {
+if (!retentionTimeData.isPresent()) {
+return false;
+}
+
+boolean isSegmentDeleted = deleteRemoteLogSegment(metadata,
+x -> x.maxTimestampMs() <= 
retentionTimeData.get().cleanupUntilMs);
+if (isSegmentDeleted) {
+remainingBreachedSize = Math.max(0, remainingBreachedSize 
- metadata.segmentSizeInBytes());
+// It is fine to have logStartOffset as 
`metadata.endOffset() + 1` as the segment offset intervals
+// are ascending with in an epoch.
+logStartOffset = OptionalLong.of(metadata.endOffset() + 1);
+logger.info("Deleted remote log segment {} due to 
retention time {}ms breach based on the largest record timestamp in the 
segment",
+metadata.remoteLogSegmentId(), 
retentionTimeData.get().retentionMs);
+}
+return isSegmentDeleted;
+}
+
+private boolean 
deleteLogStartOffsetBreachedSegments(RemoteLogSegmentMetadata metadata, long 
startOffset)
+throws RemoteStorageException, ExecutionException, 
InterruptedException {
+boolean isSegmentDeleted = deleteRemoteLogSegment(metadata, x 
-> startOffset > x.endOffset());
+if (isSegmentDeleted && retentionSizeData.isPresent()) {
+remainingBreachedSize = Math.max(0, remainingBreachedSize 
- metadata.segmentSizeInBytes());
+logger.info("Deleted remote log segment {} due to log 
start offset {} breach", metadata.remoteLogSegmentId(), startOffset);
+}
+
+return isSegmentDeleted;
+}
+
+// It removes the segments beyond the current leader's earliest 
epoch. Those segments are considered as
+// unreferenced because they are not part of the current leader 
epoch lineage.
+private boolean 
deleteLogSegmentsDueToLeaderEpochCacheTruncation(EpochEntry earliestEpochEntry, 
RemoteLogSegmentMetadata metadata) throws RemoteStorageExcep

[GitHub] [kafka] C0urante commented on a diff in pull request #14303: KAFKA-13327: Gracefully report connector validation errors instead of returning 500 responses

2023-08-28 Thread via GitHub


C0urante commented on code in PR #14303:
URL: https://github.com/apache/kafka/pull/14303#discussion_r1307910654


##
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java:
##
@@ -429,7 +429,9 @@ void enrich(ConfigDef newDef) {
 final ConfigDef.Validator typeValidator = 
ConfigDef.LambdaValidator.with(
 (String name, Object value) -> {
 validateProps(prefix);
-getConfigDefFromConfigProvidingClass(typeConfig, 
(Class<?>) value);
+if (value != null) {

Review Comment:
   > I think you could safely change the getConfigDefFromConfigProvidingClass 
implementation to return an empty stream when the value is null, rather than 
throwing an exception.
   
   Are you sure? It looks like the error swallowing in 
`ConfigDef.EnrichablePlugin::populateConfigDef` takes place conditionally, and 
we still do throw exceptions that originate from the 
`getConfigDefFromConfigProvidingClass` sometimes.
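Stepping back from the null-handling detail, the overall shape of the change under review, collecting validation errors through a callback instead of throwing a `ConfigException` on the first one, can be sketched in isolation. The names below (`ConfigError`, `validate`) mirror the diff, but the class is illustrative, not the actual Connect code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class SinkValidationSketch {

    // Illustrative stand-in for the ConfigError carrier used in the PR.
    static class ConfigError {
        final String property;
        final String value;
        final String message;
        ConfigError(String property, String value, String message) {
            this.property = property;
            this.value = value;
            this.message = message;
        }
    }

    static final String TOPICS_CONFIG = "topics";
    static final String TOPICS_REGEX_CONFIG = "topics.regex";

    // Instead of throwing on the first problem, every problem is reported
    // through the callback, so a REST validation endpoint can attach all
    // errors to the offending properties at once.
    static void validate(Map<String, String> props, Consumer<ConfigError> onError) {
        String topics = props.getOrDefault(TOPICS_CONFIG, "").trim();
        String regex = props.getOrDefault(TOPICS_REGEX_CONFIG, "").trim();
        boolean hasTopics = !topics.isEmpty();
        boolean hasRegex = !regex.isEmpty();

        if (hasTopics && hasRegex) {
            String msg = TOPICS_CONFIG + " and " + TOPICS_REGEX_CONFIG
                    + " are mutually exclusive options, but both are set.";
            onError.accept(new ConfigError(TOPICS_CONFIG, topics, msg));
            onError.accept(new ConfigError(TOPICS_REGEX_CONFIG, regex, msg));
        }
        if (!hasTopics && !hasRegex) {
            String msg = "Must configure one of " + TOPICS_CONFIG + " or " + TOPICS_REGEX_CONFIG;
            onError.accept(new ConfigError(TOPICS_CONFIG, topics, msg));
            onError.accept(new ConfigError(TOPICS_REGEX_CONFIG, regex, msg));
        }
    }

    public static void main(String[] args) {
        List<ConfigError> errors = new ArrayList<>();
        validate(Map.of(TOPICS_CONFIG, "a,b", TOPICS_REGEX_CONFIG, "a.*"), errors::add);
        System.out.println(errors.size()); // prints 2
    }
}
```

Throwing remains available as a thin wrapper: pass a callback that rethrows the first error as an exception, which is exactly how the PR keeps the old `validate(props)` behavior.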



##
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/SinkConnectorConfig.java:
##
@@ -90,50 +93,97 @@ public SinkConnectorConfig(Plugins plugins, Map<String, String> props) {
  * @param props sink configuration properties
  */
 public static void validate(Map<String, String> props) {
-final boolean hasTopicsConfig = hasTopicsConfig(props);
-final boolean hasTopicsRegexConfig = hasTopicsRegexConfig(props);
-final boolean hasDlqTopicConfig = hasDlqTopicConfig(props);
+validate(
+props,
+error -> {
+throw new ConfigException(error.property, error.value, 
error.errorMessage);
+}
+);
+}
+
+/**
+ * Perform preflight validation for the sink-specific properties for a 
connector.
+ *
+ * @param props   the configuration for the sink connector
+ * @param validatedConfig any already-known {@link ConfigValue validation 
results} for the configuration.
+ *May be empty, but may not be null. Any 
configuration errors discovered by this method will
+ *be {@link ConfigValue#addErrorMessage(String) 
added} to a value in this map, adding a new
+ *entry if one for the problematic property does 
not already exist.
+ */
+public static void validate(Map<String, String> props, Map<String, ConfigValue> validatedConfig) {
+validate(props, error -> addErrorMessage(validatedConfig, error));
+}
+
+private static void validate(Map<String, String> props, 
Consumer<ConfigError> onError) {
+final String topicsList = props.get(TOPICS_CONFIG);
+final String topicsRegex = props.get(TOPICS_REGEX_CONFIG);
+final String dlqTopic = props.getOrDefault(DLQ_TOPIC_NAME_CONFIG, 
"").trim();
+final boolean hasTopicsConfig = !Utils.isBlank(topicsList);
+final boolean hasTopicsRegexConfig = !Utils.isBlank(topicsRegex);
+final boolean hasDlqTopicConfig = !Utils.isBlank(dlqTopic);
 
 if (hasTopicsConfig && hasTopicsRegexConfig) {
-throw new ConfigException(SinkTask.TOPICS_CONFIG + " and " + 
SinkTask.TOPICS_REGEX_CONFIG +
-" are mutually exclusive options, but both are set.");
+String errorMessage = TOPICS_CONFIG + " and " + 
TOPICS_REGEX_CONFIG + " are mutually exclusive options, but both are set.";
+onError.accept(new ConfigError(TOPICS_CONFIG, topicsList, 
errorMessage));
+onError.accept(new ConfigError(TOPICS_REGEX_CONFIG, topicsRegex, 
errorMessage));
 }
 
 if (!hasTopicsConfig && !hasTopicsRegexConfig) {
-throw new ConfigException("Must configure one of " +
-SinkTask.TOPICS_CONFIG + " or " + 
SinkTask.TOPICS_REGEX_CONFIG);
+String errorMessage = "Must configure one of " + TOPICS_CONFIG + " 
or " + TOPICS_REGEX_CONFIG;
+onError.accept(new ConfigError(TOPICS_CONFIG, topicsList, 
errorMessage));
+onError.accept(new ConfigError(TOPICS_REGEX_CONFIG, topicsRegex, 
errorMessage));
 }
 
 if (hasDlqTopicConfig) {
-String dlqTopic = props.get(DLQ_TOPIC_NAME_CONFIG).trim();
 if (hasTopicsConfig) {
 List topics = parseTopicsList(props);
 if (topics.contains(dlqTopic)) {
-throw new ConfigException(String.format("The DLQ topic 
'%s' may not be included in the list of "
-+ "topics ('%s=%s') consumed by the connector", 
dlqTopic, SinkTask.TOPICS_REGEX_CONFIG, topics));
+String errorMessage = String.format(
+"The DLQ topic '%s' may not be included in the 
list of topics ('%s=%s') consumed by the connector",
+dlqTopic, TOPICS_CONFIG, topics
+);
+onError.accept(new ConfigError(TOPICS_CONFIG, topicsList, 
errorMessage));
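
   The diff above swaps fail-fast `ConfigException` throws for an error 
callback (`Consumer`) so that a single validation pass can report every 
problem at once. A minimal, self-contained sketch of that pattern (the 
`ConfigError` holder and property names here are illustrative stand-ins, 
not Connect's actual classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class CallbackValidation {
    // Illustrative stand-in for an internal error holder.
    public static final class ConfigError {
        public final String property;
        public final String message;
        public ConfigError(String property, String message) {
            this.property = property;
            this.message = message;
        }
    }

    // Collects every problem instead of throwing on the first one.
    public static List<ConfigError> validate(Map<String, String> props) {
        List<ConfigError> errors = new ArrayList<>();
        validate(props, errors::add);
        return errors;
    }

    private static void validate(Map<String, String> props, Consumer<ConfigError> onError) {
        boolean hasTopics = !isBlank(props.get("topics"));
        boolean hasRegex = !isBlank(props.get("topics.regex"));
        if (hasTopics && hasRegex) {
            String msg = "topics and topics.regex are mutually exclusive options, but both are set.";
            onError.accept(new ConfigError("topics", msg));
            onError.accept(new ConfigError("topics.regex", msg));
        }
        if (!hasTopics && !hasRegex) {
            String msg = "Must configure one of topics or topics.regex";
            onError.accept(new ConfigError("topics", msg));
            onError.accept(new ConfigError("topics.regex", msg));
        }
    }

    private static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }
}
```

   The same `validate(props, onError)` core backs both entry points: the 
throwing variant passes a callback that throws, the collecting variant 
passes `errors::add`, which is the design the diff introduces.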
   

[GitHub] [kafka] C0urante opened a new pull request, #14304: KAFKA-13328, KAFKA-13329 (1): Add preflight validations for key, value, and header converter classes

2023-08-28 Thread via GitHub


C0urante opened a new pull request, #14304:
URL: https://github.com/apache/kafka/pull/14304

   [Jira 1](https://issues.apache.org/jira/browse/KAFKA-13328), [Jira 
2](https://issues.apache.org/jira/browse/KAFKA-13329)
   
   Adds preflight validation checks for key, value, and header converter 
classes specified in connector configurations. For each converter type, these 
checks verify that:
   - The class has a public, no-args constructor
   - The class can be instantiated without an exception being thrown
   - The class is not abstract
     - In this case, a list of possible child classes is provided in a 
user-friendly error message
   - The class implements the expected interface
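
   A rough sketch of how checks like these could be performed reflectively; 
this is an illustration of the general technique under assumed names, not 
the validation code in the PR:

```java
import java.lang.reflect.Modifier;
import java.util.Optional;

public class PreflightCheck {
    // Returns an error message, or empty if the candidate passes all checks.
    public static Optional<String> check(Class<?> expected, Class<?> candidate) {
        // Must implement/extend the expected plugin interface.
        if (!expected.isAssignableFrom(candidate))
            return Optional.of(candidate.getName() + " does not implement " + expected.getName());
        // Must not be abstract (a real check would also suggest child classes here).
        if (Modifier.isAbstract(candidate.getModifiers()))
            return Optional.of(candidate.getName() + " is abstract and cannot be instantiated");
        // Must declare a public no-args constructor.
        try {
            candidate.getConstructor();
        } catch (NoSuchMethodException e) {
            return Optional.of(candidate.getName() + " lacks a public no-args constructor");
        }
        // Must instantiate without throwing.
        try {
            candidate.getConstructor().newInstance();
        } catch (ReflectiveOperationException | RuntimeException e) {
            return Optional.of(candidate.getName() + " could not be instantiated: " + e);
        }
        return Optional.empty();
    }
}
```

   Running the checks in this order lets the error message name the first 
failed requirement rather than a generic instantiation failure.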
   
   There are two components to the linked Jira tickets: this validation (which 
is fairly straightforward), and more sophisticated logic that leverages 
`ConfigDef` objects provided by these plugin classes. For ease of review, I'm 
planning on filing two PRs for these tickets, but instead of grouping by ticket 
(which would include all key/value converter changes in one ticket and all 
header converter changes in another), I think it might be easier to group by 
the kind of validation we're performing. If that's acceptable, the next PR will 
add `ConfigDef`-based validation for key, value, and header converters.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] tanay27 commented on pull request #14229: KAFKA-15224: automating version change

2023-08-28 Thread via GitHub


tanay27 commented on PR #14229:
URL: https://github.com/apache/kafka/pull/14229#issuecomment-1696389809

   > ```python3 version_change.py --version 3.6.2
   WARNING: Couldn't write lextab module . Won't 
overwrite existing lextab module
   WARNING: yacc table file version is out of date
   WARNING: Token 'BLOCK_COMMENT' defined, but not used
   WARNING: Token 'CLASS' defined, but not used
   WARNING: Token 'CONST' defined, but not used
   WARNING: Token 'ENUM' defined, but not used
   WARNING: Token 'EXPORT' defined, but not used
   WARNING: Token 'EXTENDS' defined, but not used
   WARNING: Token 'IMPORT' defined, but not used
   WARNING: Token 'LINE_COMMENT' defined, but not used
   WARNING: Token 'LINE_TERMINATOR' defined, but not used
   WARNING: Token 'SUPER' defined, but not used
   WARNING: There are 10 unused tokens
   WARNING: Couldn't create . Won't 
overwrite existing tabmodule```
   
   This is expected @divijvaidya, it doesn't change the functionality of the 
library.
   
   





[GitHub] [kafka] tanay27 commented on pull request #14229: KAFKA-15224: automating version change

2023-08-28 Thread via GitHub


tanay27 commented on PR #14229:
URL: https://github.com/apache/kafka/pull/14229#issuecomment-1696392904

   > Thank you for making this change @tanay27. Couple of remaining actions:
   > 
   > 1. Please add a README.md in the release folder mentioning what you added 
in the description here, i.e. to use `pip install -r requirements.txt` to 
prepare the dependencies etc.
   > 2. Would you suggest using virtualenv in the release folder as well? If 
yes, can you please add that to the README too?
   > 3. Please move release.py into the release folder as well.
   
   I can add the readme to this.
   
   For `release.py`, we have to make multiple changes for the paths since 
they're all hardcoded. I would suggest we open another ticket to cover that.





[GitHub] [kafka] gharris1727 commented on a diff in pull request #14303: KAFKA-13327: Gracefully report connector validation errors instead of returning 500 responses

2023-08-28 Thread via GitHub


gharris1727 commented on code in PR #14303:
URL: https://github.com/apache/kafka/pull/14303#discussion_r1307943936


##
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java:
##
@@ -429,7 +429,9 @@ void enrich(ConfigDef newDef) {
 final ConfigDef.Validator typeValidator = 
ConfigDef.LambdaValidator.with(
 (String name, Object value) -> {
 validateProps(prefix);
-getConfigDefFromConfigProvidingClass(typeConfig, 
(Class) value);
+if (value != null) {

Review Comment:
   > It looks like the error swallowing in 
ConfigDef.EnrichablePlugin::populateConfigDef takes place conditionally
   
   Yeah, the condition depends on whether validation is being executed or 
whether the full ConnectorConfig is being constructed.
   
   > we still do throw exceptions that originate from the 
getConfigDefFromConfigProvidingClass sometimes.
   
   I think the other exceptions for non-null classes are fine to leave as-is. I 
think we could only reasonably change the null-class behavior.






[GitHub] [kafka] junrao commented on a diff in pull request #14118: KAFKA-14875: Implement wakeup

2023-08-28 Thread via GitHub


junrao commented on code in PR #14118:
URL: https://github.com/apache/kafka/pull/14118#discussion_r1308011131


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/WakeupTrigger.java:
##
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.clients.consumer.internals;
+
+import org.apache.kafka.common.KafkaException;
+import org.apache.kafka.common.errors.WakeupException;
+
+import java.util.Objects;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * Ensures blocking APIs can be woken up by the consumer.wakeup().
+ */
+public class WakeupTrigger {
+private AtomicReference pendingTask = new 
AtomicReference<>(null);
+
+/*
+Wake up a pending task.  If there isn't any pending task, return a 
WakedupFuture, so that the subsequent call
+would know wakeup was previously called.
+
+If there is an active task, complete it with a WakeupException, then unset the 
pending task (return null here).
+If the current task has already been woken up, do nothing.
+ */
+public void wakeup() {
+pendingTask.getAndUpdate(task -> {
+if (task == null) {
+return new WakedupFuture();

Review Comment:
   @philipnee : I was asking a slightly different question. Consider that an 
app calls `Consumer.wakeup` when there is no pending blocking operation. Then 
the app calls `Consumer.poll` before any blocking operation is called. At this 
point, a `WakedupFuture` is still left in `WakeupTrigger.pendingTask`. Some 
point later, a blocking operation is called. It will immediately throw an 
exception because of `WakedupFuture`, which seems unexpected.  So, should 
`Consumer.poll` clear `WakedupFuture` in `WakeupTrigger.pendingTask`?
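
   The stale-marker scenario described above can be modeled with the same 
`AtomicReference.getAndUpdate` pattern. A toy sketch (class and method names 
are hypothetical, not the actual `WakeupTrigger`) showing why a `poll` that 
clears the leftover wakeup marker avoids surprising the next blocking call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

public class WakeupSketch {
    // Sentinel meaning "wakeup() was called while nothing was blocking".
    private static final CompletableFuture<Void> WAKEUP_MARKER = new CompletableFuture<>();
    private final AtomicReference<CompletableFuture<Void>> pendingTask = new AtomicReference<>(null);

    public void wakeup() {
        pendingTask.getAndUpdate(task -> {
            if (task == null)
                return WAKEUP_MARKER;  // remember the wakeup for a later blocking call
            task.completeExceptionally(new RuntimeException("woken up"));
            return null;               // active task handled; clear the slot
        });
    }

    // A poll() that clears the marker stops a long-ago wakeup() from
    // aborting the *next* blocking operation. Returns true if a stale
    // marker was pending.
    public boolean clearStaleWakeup() {
        return pendingTask.compareAndSet(WAKEUP_MARKER, null);
    }

    // A blocking call would fail fast if the marker is still set.
    public boolean blockingCallWouldThrow() {
        return pendingTask.get() == WAKEUP_MARKER;
    }
}
```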






[GitHub] [kafka] jolshan closed pull request #10071: KAFKA-12298: Create LeaderAndIsrRequestBenchmark

2023-08-28 Thread via GitHub


jolshan closed pull request #10071: KAFKA-12298: Create 
LeaderAndIsrRequestBenchmark
URL: https://github.com/apache/kafka/pull/10071





[GitHub] [kafka] jolshan closed pull request #10461: KAFKA-12603: Add benchmarks for handleFetchRequest and FetchContext

2023-08-28 Thread via GitHub


jolshan closed pull request #10461: KAFKA-12603: Add benchmarks for 
handleFetchRequest and FetchContext
URL: https://github.com/apache/kafka/pull/10461




